Use of AI in Simployer One HRM
Simployer offers AI-enabled HR features through our SIA chatbot. The SIA chatbot makes interaction with the HRM system easier and more natural: end users can write questions and get answers based on the company's own written routines and processes stored in the system. The SIA chatbot respects the access control and roles implemented in Simployer One.
The chatbot bases its answers solely on the content in the Customer's own tenant – which is also its limitation. It will not index, process, or use information outside of your tenant.
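As a sketch of how a retrieval step can enforce these boundaries before any answer is generated, consider the filter below. The class and function names are hypothetical illustrations, not the actual SIA implementation:

```python
# Illustrative sketch only: shows tenant scoping and role-based filtering
# applied before chatbot content ever reaches the answer-generation step.
# All names here are hypothetical, not the actual SIA implementation.
from dataclasses import dataclass


@dataclass
class Document:
    tenant_id: str       # which customer tenant owns this content
    required_role: str   # role a user must hold to read it
    text: str


def retrievable(docs: list[Document], tenant_id: str, roles: set[str]) -> list[Document]:
    """Return only documents from the user's own tenant that the user's
    existing roles already grant access to."""
    return [
        d for d in docs
        if d.tenant_id == tenant_id and d.required_role in roles
    ]
```

With this shape, content from another tenant, or content the user could not open in the HRM system itself, is never available as answer material.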
Certain functions in the HR platform may be classified as high-risk AI (e.g., analytics and recommendation features) under the EU AI Act.
The SIA chatbot can be deactivated for your tenant if required. A person with the "owner" role for the tenant (the Customer) may request this by creating a support ticket at https://support.simployer.com.
Where is the information used and stored?
The Simployer One HRM system comprises modules, and customers can have a mix of modules depending on what they have bought. To understand where the information is used and stored, the following definitions are important:
- Each module has a set of processing activities that define which data in the HRM system that specific module processes.
- Each module also has one or more sub-processors, as defined in our Data Processing Agreement. An example is Microsoft Azure, one of our professional hosting partners.
- The geographic location of the sub-processors defines where the data is physically processed. An example is Amsterdam, where Microsoft Azure has its West Europe data center.
To understand:
- How AI is utilized within each module, visit the list of processing activities
- Where the data is stored, look up the module reference in the list of sub-processors
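The two lookups above can be combined into a simple registry: each module maps to its processing activities and to its sub-processors, and each sub-processor carries a region. The module name, activities, and regions below are hypothetical examples, not Simployer's actual registry:

```python
# Illustrative sketch: a module registry linking processing activities,
# sub-processors, and physical processing locations. The entries are
# hypothetical examples, not Simployer's actual lists.
MODULES = {
    "Handbooks": {
        "processing_activities": ["AI Q&A on handbook content"],
        "sub_processors": [
            {"name": "Microsoft Azure", "region": "West Europe (Amsterdam)"},
        ],
    },
}


def where_is_data_processed(module: str) -> list[str]:
    """Resolve a module to the physical locations of its sub-processors."""
    return [sp["region"] for sp in MODULES[module]["sub_processors"]]
```

Answering "where is my data stored?" for a given module is then a single lookup, e.g. `where_is_data_processed("Handbooks")`.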
How is the data classified?
The data is broadly classified as customer data, ensuring that the Customer retains ownership and all rights to the data, in accordance with existing agreements, including the data processing agreement.
Data is not used to train AI models or for any other AI-related analysis. However, end users can provide "thumbs up or thumbs down" feedback on the responses. This feedback is used solely by Simployer for statistical purposes to improve the solution.
Types of data the HR-related AI features may process
- Employee master data (name, contact info, employment details)
- Absence and leave information
- Performance-related information
- Competence and skills data
- Salary and compensation data
- Text input provided by users
Compliance alignment
The data processed by AI is governed by:
- GDPR (data minimization, accuracy, purpose limitation)
- EU AI Act obligations for high-risk HR systems
- Simployer’s internal Information Security Management System (ISMS) procedures (ISO/IEC 27001 alignment)
Mapping of AI use cases and data categories
| # | AI use case in Simployer products | Main personal data categories processed by AI | GDPR role & typical lawful basis* | AI Act risk category (Simployer view) | Key controls |
|---|---|---|---|---|---|
| 1 | Handbooks AI – Q&A on HR/employee handbooks | Question text (may incidentally contain personal data), user identity (via auth), access rights; handbook content itself | Customer as controller: Art. 6(1)(b) (employment contract) and/or 6(1)(f) (legitimate interest – provide HR info to staff). Simployer as processor for the Q&A transaction. | Limited-risk AI (conversational assistant that does not itself make HR decisions) | No storage of Q&A by default, only transient processing; only users with existing permissions can retrieve content; no training of models on customer data. |
| 2 | Handbooks AI – feedback logging (service improvement) | Question asked, AI response text, language, company name. Designed not to include personal data. | Simployer as controller: Art. 6(1)(f) GDPR – legitimate interest in improving the AI service and troubleshooting; user feedback is voluntary and narrowly scoped. | Minimal / limited-risk AI | Access limited to authorized staff; not used to train the foundation model. |
| 3 | HRM AI features – HR analytics & recommendations (e.g. performance, comp, equal pay, succession) | Employee master data (name, position, org unit), competence/skills, performance assessments, goals, salary & benefits, absence/leave, possibly sick leave, and career/succession info (depending on modules used). | Customer as controller: normally Art. 6(1)(b) (employment contract) and/or 6(1)(c) (labour law obligations), with some analytics under 6(1)(f) (legitimate interest in fair and efficient HR). For health-related data (e.g. sick leave), Art. 9(2)(b) GDPR (employment & social protection) is typical. Simployer as processor under DPA. | High-risk AI (Annex III, point 5 – employment/HR systems) where AI outputs can influence recruitment, promotion, pay, or termination. | AI only produces recommendations / insights; Simployer requires human oversight and disclaimers (“AI-generated suggestion – requires HR review”) and logs output + user confirmations in line with AI Act & GDPR (Art. 22). |
| 4 | Equal Pay AI analytics (within Compensation / Equal Pay modules) | Salary & benefits, position, grade/band, location, gender, FTE %, and other HR attributes used for pay-equity analysis. | Customer as controller: typically Art. 6(1)(c) (legal obligations for equal pay / anti-discrimination) and/or 6(1)(f) (legitimate interest in pay fairness & compliance). Potentially Art. 9(2)(b)/(g) if special categories are included in equity analysis. Simployer as processor under DPA. | High-risk AI (employment conditions / equal treatment) if AI-driven outputs are used to support salary decisions or compliance reporting. | Human review of outputs required; bias testing and documentation captured in AI Risk Register and ISMS. |
| 5 | Employee surveys / pulse analytics – if/when AI features are applied | Contact info, profile data, device info; pulse responses are stored anonymously and cannot be linked to an individual. | Customer as controller: Art. 6(1)(b) (employment contract – engagement program as part of HR) and/or 6(1)(f) (legitimate interest in measuring engagement). As answers are anonymised, GDPR risk is reduced for analytics. Simployer as processor under DPA. | Likely limited-risk AI (analytics/insights on anonymised or aggregated data, not direct automated decisions about individuals). | Anonymisation and minimum group sizes are key controls; no linkage back to individuals; if AI is added, it should operate only on aggregated datasets. |
| 6 | Whistleblower (Employee Surveys) – if AI is used for triage or summarisation | Free-text whistleblower reports, which may contain personal data about reporter and others, including possible special categories or offence data. | Customer as controller: typically Art. 6(1)(c) (legal obligation under whistleblowing legislation) and/or Art. 6(1)(f) (legitimate interest in investigating serious incidents). For special categories, Art. 9(2)(b)/(f)/(g); for offence data, local law basis. Simployer as processor under DPA. | If AI is only used for summarisation/triage with human in the loop, this is likely limited-risk; if AI outputs materially drive disciplinary decisions, it may shift towards high-risk for employment. | Strong access control & confidentiality required; AI should be used as support only, with clear human oversight and no fully automated disciplinary outcomes (Art. 22 GDPR). |
| 7 | Platform-level AI logs & monitoring (for all AI features) | Technical metadata, timestamps, service metrics, error logs; may include pseudonymous IDs but not content where that’s explicitly excluded (e.g., normal Handbooks AI chats). | Simployer as controller: Art. 6(1)(f) – legitimate interest in security, monitoring, and continuous improvement of services. | Minimal-risk AI / non-AI (this is more about logging the system than AI per se) | Logs are stored in EU; limited access; tamper protected; integrated with ISMS monitoring and post-market AI incident tracking. |
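The "minimum group size" control listed for survey and pulse analytics can be illustrated with a small sketch: aggregated results are only reported for groups large enough that individual respondents cannot be singled out. The threshold of 5 and the sample data below are assumed examples, not Simployer's actual policy:

```python
# Illustrative sketch: suppress aggregated survey results for groups
# smaller than a minimum size, so responses cannot be traced back to
# individuals. The threshold and data are hypothetical examples.
MIN_GROUP_SIZE = 5


def aggregate_scores(responses: dict[str, list[int]]) -> dict[str, float]:
    """Return the average score per group, omitting any group with
    fewer than MIN_GROUP_SIZE respondents."""
    return {
        group: sum(scores) / len(scores)
        for group, scores in responses.items()
        if len(scores) >= MIN_GROUP_SIZE
    }


responses = {
    "Sales": [4, 5, 3, 4, 5, 4],  # 6 respondents -> reported
    "Legal": [2, 3],              # 2 respondents -> suppressed
}
print(aggregate_scores(responses))  # only the "Sales" average appears
```

Combined with storing responses anonymously, this ensures analytics operate only on aggregates that are safe to show.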