Use of AI in Simployer One HRM

Simployer offers AI-enabled HR features through our SIA chatbot. The SIA chatbot makes interaction with the HRM system easier and more natural: end users can ask questions and get answers based on the company's own written routines and processes stored in the system. The SIA chatbot respects the access control and roles implemented in Simployer One.

The chatbot bases its answers solely on the content in the Customer's own tenant – which is also its limitation. It will not index, process, or use information outside of your tenant.
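To illustrate the tenant isolation and role check described above, the retrieval step can be thought of as a pre-filter applied before any content reaches the language model. This is a minimal sketch only, with hypothetical field names (`tenant_id`, `allowed_roles`) – not Simployer's actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HandbookDocument:
    tenant_id: str            # which customer tenant owns this content
    allowed_roles: frozenset  # roles that may read it, e.g. {"employee", "manager"}
    text: str

def build_context(docs, tenant_id, user_roles):
    """Return only the documents the chatbot may use as context:
    same tenant as the caller, and at least one role in common."""
    return [
        d.text
        for d in docs
        if d.tenant_id == tenant_id and d.allowed_roles & set(user_roles)
    ]
```

Only the filtered texts are passed on as context, so an answer can never draw on another tenant's content or on documents the user lacks permission to read.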

Certain functions in the HR platform (e.g., analytics and recommendation features) may be classified as high-risk AI under the EU AI Act.

The SIA chatbot can be deactivated for your tenant if required. A person with the "owner" role for the tenant (the Customer) may request this by creating a support ticket at https://support.simployer.com

 

Transparency and instructions

The interface and design of the Simployer chatbot clearly indicate that the user is interacting with an AI system, meeting the “obvious from the context” requirement under Article 52 of the EU AI Act. Nevertheless, to increase clarity and trust, we also display a visible label stating “AI assistant” in proximity to the chat interface.

Simployer provides clear and accessible “Instructions for Use” for all AI-enabled features. These instructions specify the intended purpose of each feature, data limitations, conditions of use, and the requirement for human review for all HR-relevant outputs. The instructions are made available directly within the application and through the Simployer Trust Center to ensure full transparency for all deployers and end users.

 

Where is the information used and stored?

The Simployer One HRM system comprises modules, and customers can have a mix of modules depending on what they have purchased. To understand where the information is used and stored, the following definitions are important:

  • Each module has a set of processing activities that define which data in the HRM system that module processes.
  • Each module also has one or more sub-processors, as defined in our Data Processing Agreement. An example is Microsoft Azure, one of our professional hosting partners.
  • The geographic location of the sub-processors defines where the data is physically processed. An example is Amsterdam, where Microsoft Azure has its West Europe data center.


 

What AI models and infrastructure do we use?

Simployer’s AI functionality is built on Microsoft Azure OpenAI Services hosted exclusively within the EU region. We use foundation models provided by Microsoft, such as GPT-based large language models, operating fully within Azure’s secure and compliant infrastructure. Customer data is never used to train, fine-tune, or modify these models. Simployer does not operate or host its own AI model training infrastructure; all model inference is performed using Microsoft’s enterprise-grade, EU-hosted AI services. In addition, vector search and retrieval functions use Qdrant, also hosted within the EU and managed under the same security and data protection standards as our core infrastructure.
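The inference-only flow described above – embed the question, retrieve tenant-scoped context from the vector store, generate an answer – can be sketched as follows. This is an illustration only: the callables `embed`, `search`, and `generate` are stand-ins for the Azure OpenAI embedding endpoint, Qdrant vector search, and the Azure OpenAI chat model, and none of this is Simployer's actual code:

```python
def answer_question(question, tenant_id, embed, search, generate):
    """Retrieval-augmented inference: every step only *reads* from the
    model and the vector store; nothing here trains or modifies a model."""
    query_vector = embed(question)                            # e.g. Azure OpenAI embeddings
    context = search(query_vector, {"tenant_id": tenant_id})  # e.g. Qdrant with a tenant filter
    prompt = (
        "Answer using only the context below.\n\n"
        + "\n".join(context)
        + f"\n\nQuestion: {question}"
    )
    return generate(prompt)                                   # e.g. Azure OpenAI chat completion
```

Because the model is only ever called for inference, customer data flows through the prompt at request time and is never used to train or fine-tune the underlying foundation model.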

 

How is the data classified?

The data is broadly classified as customer data, ensuring that the Customer retains ownership and all rights to the data, in accordance with existing agreements, including the data processing agreement.

Data is not used to train AI models or for any other AI-related analysis. However, functionality is available for end users to provide "thumbs up or thumbs down" feedback on the responses. This feedback is used solely by Simployer for statistical purposes to improve the solution.
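A feedback event of this kind can be sketched as a record that deliberately carries no user identifier, so it can only support aggregate statistics. The field names below are illustrative, not Simployer's actual schema:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class FeedbackEvent:
    """Thumbs-up/down feedback, scoped for aggregate statistics only.
    Deliberately contains no user identifier or other personal data."""
    rating: str    # "up" or "down"
    language: str  # UI language of the session, e.g. "nb-NO"
    company: str   # company name, used to group statistics

    def __post_init__(self):
        if self.rating not in ("up", "down"):
            raise ValueError("rating must be 'up' or 'down'")

def to_stats_row(event: FeedbackEvent) -> dict:
    """Serialise for the statistics pipeline; the dict mirrors the
    dataclass fields exactly, so nothing beyond them can leak in."""
    return asdict(event)
```

Keeping the schema this narrow is itself a data-minimisation control: a record with no user field cannot later be repurposed to profile individuals.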

 

Types of data the HR-related AI features may process

  • Employee master data (name, contact info, employment details)

  • Absence and leave information

  • Performance-related information

  • Competence and skills data

  • Salary and compensation data

  • Text input provided by users

 

Compliance alignment

The data processed by AI is governed by:

  • GDPR (data minimization, accuracy, purpose limitation)

  • EU AI Act obligations for high-risk HR systems

  • Simployer’s internal Information Security Management System (ISMS) procedures (ISO/IEC 27001 alignment)

 

AI Risk Assessment

Some AI-enabled features within the Simployer HR platform relate to HR processes and therefore fall within the scope of Annex III “employment, workers’ management and access to self-employment” of the EU AI Act. This Annex describes the types of use cases that, if implemented as stand-alone AI systems making or determining employment decisions, may be considered “high-risk”.
Simployer is not a “high-risk AI provider” under Annex III(4) because our AI functionality is not used to make or determine employment decisions and does not automate or replace managerial judgement. Instead, our AI features operate strictly as assistive, advisory, or information-retrieval tools. All outputs require explicit human review, validation, and decision-making by the customer. This places our AI within the category of non-automated, low-impact decision support, which is not classified as high-risk under the final EU AI Act.

Where AI features assist HR professionals with insights or suggestions, they are designed with built-in human oversight, explainability, and disclaimers. Simployer therefore does not fall under the regulatory obligations of high-risk AI providers (Articles 9–33) but instead complies with the applicable transparency and user-information obligations (Article 52).

 

Mapping of AI use cases and data categories

For each AI use case in Simployer products, the mapping below lists the main personal data categories processed by AI, the GDPR role and typical lawful basis*, the AI Act risk category (Simployer view), the Simployer risk category, and the key controls applied.

1. Handbooks AI – Q&A on HR/employee handbooks
  • Personal data categories: Question text (may incidentally contain personal data), user identity (via auth), access rights; handbook content itself.
  • GDPR role & typical lawful basis*: Customer as controller: Art. 6(1)(b) (employment contract) and/or 6(1)(f) (legitimate interest – provide HR info to staff). Simployer as processor for the Q&A transaction.
  • AI Act risk category (Simployer view): Limited-risk AI (conversational assistant that does not itself make HR decisions).
  • Simployer risk category: Minimal risk AI.
  • Key controls: No storage of Q&A by default, only transient processing; only users with existing permissions can retrieve content; no training of models on customer data.

2. Handbooks AI – feedback logging (service improvement)
  • Personal data categories: Question asked, AI response text, language, company name. Designed not to include personal data.
  • GDPR role & typical lawful basis*: Simployer as controller: Art. 6(1)(f) GDPR – legitimate interest in improving the AI service and troubleshooting; user feedback is voluntary and narrowly scoped.
  • AI Act risk category (Simployer view): Minimal / limited-risk AI.
  • Simployer risk category: Minimal risk AI.
  • Key controls: Access limited to authorized staff; not used to train the foundation model.

3. HRM AI features – HR analytics & recommendations (e.g. performance, compensation, equal pay, succession)
  • Personal data categories: Employee master data (name, position, org unit), competence/skills, performance assessments, goals, salary & benefits, absence/leave, possibly sick leave, and career/succession info (depending on modules used).
  • GDPR role & typical lawful basis*: Customer as controller: normally Art. 6(1)(b) (employment contract) and/or 6(1)(c) (labour law obligations), with some analytics under 6(1)(f) (legitimate interest in fair and efficient HR). For health-related data (e.g. sick leave), Art. 9(2)(b) GDPR (employment & social protection) is typical. Simployer as processor under DPA.
  • AI Act risk category (Simployer view): High-risk AI (Annex III, point 4 – employment, workers’ management and access to self-employment).
  • Simployer risk category: Limited risk AI.
  • Key controls: AI only produces recommendations / insights; Simployer requires human oversight and disclaimers (“AI-generated suggestion – requires HR review”) and logs output + user confirmations in line with the AI Act & GDPR (Art. 22).

4. Equal Pay AI analytics (within Compensation / Equal Pay modules)
  • Personal data categories: Salary & benefits, position, grade/band, location, gender, FTE %, and other HR attributes used for pay-equity analysis.
  • GDPR role & typical lawful basis*: Customer as controller: typically Art. 6(1)(c) (legal obligations for equal pay / anti-discrimination) and/or 6(1)(f) (legitimate interest in pay fairness & compliance). Potentially Art. 9(2)(b)/(g) if special categories are included in equity analysis. Simployer as processor under DPA.
  • AI Act risk category (Simployer view): High-risk AI (Annex III, point 4 – employment, workers’ management and access to self-employment).
  • Simployer risk category: Limited risk AI.
  • Key controls: Human review of outputs required; bias testing and documentation captured in the AI Risk Register and ISMS.

5. Employee surveys / pulse analytics – if/when AI features are applied
  • Personal data categories: Contact info, profile data, device info; pulse responses are stored anonymously and cannot be linked to an individual.
  • GDPR role & typical lawful basis*: Customer as controller: Art. 6(1)(b) (employment contract – engagement program as part of HR) and/or 6(1)(f) (legitimate interest in measuring engagement). As answers are anonymised, GDPR risk is reduced for analytics. Simployer as processor under DPA.
  • AI Act risk category (Simployer view): Likely limited-risk AI (analytics/insights on anonymised or aggregated data, not direct automated decisions about individuals).
  • Simployer risk category: Minimal risk AI.
  • Key controls: Anonymisation and minimum group sizes are key controls; no linkage back to individuals; if AI is added, it should operate only on aggregated datasets.

6. Whistleblower (Employee Surveys) – if AI is used for triage or summarisation
  • Personal data categories: Free-text whistleblower reports, which may contain personal data about the reporter and others, including possible special categories or offence data.
  • GDPR role & typical lawful basis*: Customer as controller: typically Art. 6(1)(c) (legal obligation under whistleblowing legislation) and/or Art. 6(1)(f) (legitimate interest in investigating serious incidents). For special categories, Art. 9(2)(b)/(f)/(g); for offence data, local law basis. Simployer as processor under DPA.
  • AI Act risk category (Simployer view): If AI is only used for summarisation/triage with a human in the loop, this is likely limited-risk; if AI outputs materially drive disciplinary decisions, it may shift towards high-risk for employment.
  • Simployer risk category: Limited risk AI.
  • Key controls: Strong access control & confidentiality required; AI should be used as support only, with clear human oversight and no fully automated disciplinary outcomes (Art. 22 GDPR).

7. Platform-level AI logs & monitoring (for all AI features)
  • Personal data categories: Technical metadata, timestamps, service metrics, error logs; may include pseudonymous IDs but not content where that’s explicitly excluded (e.g., normal Handbooks AI chats).
  • GDPR role & typical lawful basis*: Simployer as controller: Art. 6(1)(f) – legitimate interest in security, monitoring, and continuous improvement of services.
  • AI Act risk category (Simployer view): Minimal-risk AI / non-AI (this is more about logging the system than AI per se).
  • Simployer risk category: Minimal risk AI.
  • Key controls: Logs are stored in the EU; limited access; tamper protected; integrated with ISMS monitoring and post-market AI incident tracking.
