Data literacy for Aalto community
Start your journey towards a data-driven way of working today!
Artificial Intelligence (AI) tools that use generative AI, such as Aalto AI Assistant, support and expedite the execution of many work tasks. AI enables us to enhance work efficiency and conduct new experiments. At Aalto University, we encourage employees to experiment with and familiarize themselves with the use of AI and the opportunities it offers in their work.
Using AI requires an understanding of the restrictions and considerations related to its use. The aim of this guideline, designed for service personnel, is to help you learn the rules for using generative AI at Aalto University.
Guidance for the use of AI in research, teaching and studying is provided on the following pages:
AI literacy is a new and important skill that all of us must learn to keep up with the changes enabled by AI and other new technological solutions. Aalto University offers its employees (and students) training opportunities and materials to develop AI and data literacy. These materials are collected on this webpage: Data literacy for Aalto community | Aalto University.
You can begin to develop your AI literacy with tips from the website AI in Aalto | Aalto University.
Information on Aalto University's internal productized AI services.
Use AI tools openly, transparently, and responsibly. Keep in mind the values of Aalto University – responsibility, courage, and collaboration – as well as ethical principles.
The ethical and responsible use of generative AI means, for example, always verifying the correctness of AI-generated outputs, as AI might hallucinate and provide incorrect information. Do not use an output (such as code, data, or a translation) generated by generative AI if you do not understand what it does or what it contains and means.
The outputs generated by generative AI are determined by the material used to train the AI, the logic and algorithms of the AI system, the input data, the framing of questions, and the context provided. Checklist for implementing AI-generated output:
Please use only AI tools approved by Aalto University and follow the instructions provided for their use. The AI tools available at Aalto University can be found on the AI Services page:
AI can particularly enhance the handling of routine tasks. You can use generative AI in work tasks such as:
Ideation and sketching: In the ideation and drafting of news, educational materials or other written works, and in programming. AI can also help find new perspectives.
Summarizing and translating: AI enables quick translation of texts into another language and the creation of summaries. It is also handy for video captioning.
Enhancing text: AI can aid in proofreading texts or improving readability. It also allows you to easily edit the style of the text.
Analyzing data sets: AI can also be a useful tool for analyzing large data sets. However, when performing analytics, be mindful of data protection requirements. For example, respondents to a survey must be informed about the use of AI in the privacy notice if AI is used to analyze responses.
Tips for prompt structuring and using AI in work can be found on the Work with generative AI page:
Generative AI may not always be the right tool to use. Instead, another tool or technical solution might be better suited, such as software robotics. Aalto University IT services can help you choose the appropriate tool or solution for your purpose. You can start by familiarizing yourself with the webpage 5 questions before acquiring software or digital services | Aalto University.
AI is not suitable for all purposes, as the EU Artificial Intelligence Act (AI Act) prohibits the use of AI in certain cases.
AI must not be used for: harmful manipulation, inferring the emotions of natural persons in workplaces and educational institutions, exploiting vulnerabilities of a natural person or a specific group of persons (e.g. people with disabilities), social scoring, crime prediction, creating or expanding facial recognition databases through the untargeted scraping of facial images from the Internet or CCTV footage, biometric categorization based on sensitive personal data, or ‘real-time’ remote biometric identification in public spaces (law enforcement).
Inferring emotions based on the biometric data of employees, job applicants, students, and applicants for study is also prohibited.
Some AI use cases (e.g. in education and human resources) contain specific risks. Therefore, the AI Act imposes additional requirements on deployment in these cases. More detailed information on high-risk uses can be found in section 5.2.1 of this guideline. When AI use is defined as high-risk according to the AI Act, such usage is not permitted unless the requirements related to risk assessment and documentation are met.
If the AI system is used only as “supportive AI” – where AI does not make automatic decisions concerning natural persons but a human is responsible for the final judgment and decision-making – it is not high-risk use, and the requirements for high-risk use do not need to be met. Examples of “supportive AI” usage:
The most likely high-risk use cases to encounter in Aalto University’s operations are:*
Education and student selection
a) Student selection: For example, AI would be used to assess which candidates should be selected to study at Aalto University.
b) Using AI to assess learning outcomes (also when learning outcomes are utilized to guide the learning process): For example, automatic grading of a course exam or using AI to predict which students might drop out of their studies.
c) AI systems intended to be used to assess a person’s suitable educational level or access to further education.
d) Using AI for monitoring students and detecting prohibited behavior during exams: For example, an AI system supervising remote exams.
Recruitment and employment matters
a) Using AI in recruitment: AI systems designed for use in recruiting or selecting natural persons, especially for targeting job advertisements, analyzing and filtering applications, and assessing applicants. For example, a system that automatically rejects applications based on certain parameters, such as rejecting all applicants over 40 years of age when recruiting for a junior-level expert position, or targeting job advertisements only to applicants below a certain age.
b) Using AI in determining terms of employment, career progression and termination of employment: AI systems designed for use in decision-making on the terms of employment, career progression and the termination of employment contracts, in task assignment based on individual behavior or personality traits, or in monitoring and assessing the performance and behavior of individuals in such relationships.
*Example cases have not gone through the approval process described in section 5.2.
Additionally, remote biometric identification, profiling, and inferring emotions are classified as high-risk uses by the AI Act.
The AI Act also classifies other high-risk use cases, which are unlikely to be encountered in the operations of Aalto University. Such use cases include access and use of essential private services and essential public services and benefits, high-risk use in law enforcement and migration, and use related to administration of justice and democratic processes. There is also high-risk use related to product safety – using AI systems as a safety component in certain products or if the AI system itself is a product that is required to undergo a third-party conformity assessment. An example of this category is autonomous vehicles.
When using generative AI, you must evaluate the classification of the information you input into the AI system and whether it contains information subject to specific requirements. You can find the guidelines here: Classification of information: basic instructions and services | Aalto University
Generative AI processes information presented as questions, text, audio, or image material. Do not input personal data or non-public information into the AI system unless Aalto University has specifically approved the service for internal, confidential, or secret information. You can find system-specific instructions here: Classification of information: basic instructions and services | Aalto University (select "Information classification guide (information systems and digital services)”)
The General Data Protection Regulation (GDPR) also applies to the use of AI. If a person can be identified directly or indirectly, the information is personal data. The processing of personal data should always be minimized. Consider whether you could remove personal data before inputting the material into the AI tool. If removing personal data is not possible, the data should be pseudonymized. Pseudonymization means removing identifiers from personal data or replacing them with other identifiers so that the person cannot be directly identified without additional information (e.g. a name replaced with the identifier H001). The additional information should be stored separately from the personal data. Remember that pseudonymized data is still personal data, as the person can be identified if the additional information (the code key) is available. Also remember that, as a user, you are responsible for the data inputted into AI and for any personal data contained in the outputs generated by AI, as well as for sharing or utilizing them.
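The pseudonymization described above can be sketched in a few lines of code. This is a minimal illustration only, assuming survey data stored as a list of records with a "name" field; the field names and the H001-style identifier format are illustrative, not an Aalto standard:

```python
def pseudonymize(records, identifier_field="name"):
    """Replace direct identifiers with codes; return (pseudonymized data, code key).

    The code key maps codes back to names and must be stored separately
    from the pseudonymized data. Note: pseudonymized data is still
    personal data under the GDPR, because the code key allows re-identification.
    """
    code_key = {}
    pseudonymized = []
    for i, record in enumerate(records, start=1):
        code = f"H{i:03d}"                  # e.g. H001, H002, ...
        code_key[code] = record[identifier_field]
        cleaned = dict(record)              # copy so the original is untouched
        cleaned[identifier_field] = code
        pseudonymized.append(cleaned)
    return pseudonymized, code_key

# Hypothetical survey responses containing names (direct identifiers)
records = [
    {"name": "Anna Virtanen", "response": "The course was useful."},
    {"name": "Mikko Korhonen", "response": "More exercises, please."},
]
safe, key = pseudonymize(records)
# "safe" could then be passed to an approved AI tool; "key" is stored separately.
```

Before relying on an approach like this in practice, check that no indirect identifiers (e.g. rare job titles or free-text mentions of names) remain in the other fields, as these can also make a person identifiable.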
Aalto University’s data protection guidelines are available in the Data protection hub:
Key questions for the user of an AI system are the right to use works as input data in the AI system and the obligation to disclose the output of the AI system.
If a work is used as input material in an AI system and it remains in the AI system for future use, the permission of the rights holder is required. The user must check the AI system’s terms of use and remove the used input material from the training data unless the work is
Outputs generated by AI do not have copyright protection, as they are not considered original works. AI-generated outputs can only be protected by contract terms.
An output produced by user prompts can infringe copyright if the AI generates a modification or direct copy of a copyright-protected work that has been used as training data or input for the AI model.
Before using AI-generated outputs, assess the potential risk of copyright infringement. The sources used can be requested from the AI system for a case-by-case risk assessment. However, copyright law allows the use of another’s work, e.g. under the right of quotation, where excerpts from a published work can be used in a manner required by proper usage and to the extent necessary for the purpose.
To reduce the risk of copyright infringement, users should avoid requesting AI to generate outputs that resemble a copyright-protected work. The terms of use of the AI system must also be taken into consideration.
Detailed guidelines on AI and copyright can be found here: Artificial intelligence and copyright | Aalto University. The guidelines cover, among other things, referencing practices for outputs generated by AI.