Generative AI describes algorithms that can create new content such as text, images, music, computer code or other media in response to prompts. Employees are responsible for exercising good judgment regarding the appropriate use of GenAI. This use must be ethical, legal, and aligned with our company values.
Do & Don’t
Do take an interest in AI and think about how it could improve your work, personal life and even that of the wider Loganair community. Educate yourself on how to use the technology safely by knowing our approved GenAI tools and following the guidance in this document.
Do be transparent about when you are using GenAI to produce content. This doesn’t mean declaring GenAI’s use in every email it helped you write, but its use should be made clear in public content such as marketing and presentations. If you cannot reveal your use of GenAI for the task, then do not use it.
Don’t enter confidential information, such as Loganair’s customer records or sensitive commercial or technical information our customers share with us, into publicly available applications like ChatGPT. Remember: Loganair data must stay on Loganair systems.
Don’t use GenAI-produced content without taking the time to understand and check it. GenAI essentially works by predicting the most likely answer, which can lead to incorrect answers. Read about “Hallucinations” in the next section.
Key Concepts
Generative AI (GenAI): refers to an artificial intelligence technology that derives new versions of text, audio, or visual imagery from large bodies of data in response to user prompts. GenAI can be used in stand-alone applications, such as ChatGPT or Bard, or incorporated into other applications, such as Microsoft Bing or Microsoft Office Suite. If you have any questions about what constitutes GenAI, please contact the IT Helpdesk.
Large Language Models (LLMs): In the context of artificial intelligence and natural language processing, LLMs often refer to models that are trained on vast amounts of data to understand and generate human-like text. GPT-3 and GPT-4 are examples of LLMs.
Hallucinations: a term describing how GenAI can, at times, produce fictitious answers. The issue is not simply that the answers are wrong; it is that they are confident and convincing. Research has shown that, at times, we can blindly favour suggestions from automated systems, ignoring our own better judgment.
Cybersecurity: The use of AI tools may introduce new opportunities for cyber-attacks. Hackers can manipulate LLMs to give away information they shouldn’t, including personal or sensitive information. They can also use LLMs to increase the speed and scale of existing attacks, such as phishing emails.
Confidentiality and Privacy: Information entered into GenAI applications could later appear in outputs elsewhere or be used to train new models. This is especially important when working with intellectual property, as its loss can have a material impact on its owner. Our approved GenAI tools can be used without this concern, though do remain mindful of the prohibited use cases.
Model Bias: GenAI tools incorporate any biases of the data sets that were used to train them. This modelling bias may mean generated content does not always align with Loganair’s values and our commitment to diversity, equity and inclusion. Be aware that information received from GenAI tools may contain systematic errors or favour certain groups, leading to unfair or discriminatory outcomes.
Transparency Risk: It is important to be transparent about the use of GenAI tools, so that content generated with GenAI can be easily identified. Public-facing content created with GenAI must be clearly identified; failing to disclose GenAI usage risks losing customers’ trust and breaching new and existing regulations.
Third-party Risk: Data sent to third parties, such as suppliers, could be used in the third party’s use of GenAI tools.
What GenAI Tools Can I Use?
Use of Loganair’s corporate GenAI tools, such as Bing Chat Enterprise and Microsoft Copilot, is permitted with Loganair’s company data. Use of other GenAI tools is permitted only if no personal or confidential data is sent to the system. If you are unsure whether an application can be used, please contact ITHelpdesk@loganair.co.uk.
What If I Want To Use Or Build A Different Tool?
We are excited about the prospects of how GenAI and other similar technologies could change Loganair and the world around us. As part of this we have created an AI Community of Excellence to help identify AI initiatives across the organisation, providing support to those involved while helping manage potential risks such as regulatory compliance and legal obligations. Our Head of IT is leading this initiative.
If you are working on a project involving AI you must declare it to the AI Community of Excellence. You can do this by emailing ITHelpdesk@loganair.co.uk and asking to be put in touch with a member of this team.
What Use Cases Are Prohibited?
While it is impossible to cover every scenario, the following should act as guidance.
- Making decisions that impact another individual solely by automated means.
- Seeking legal guidance related to business decisions, or drafting legal documents.
- Processing customer data as part of our managed service offerings, unless declared in the contract.
- Entering special category data, as defined by the UK GDPR, into GenAI tools.
- Using a GenAI application to rate the performance of other employees.
Please report any prohibited uses of GenAI to the IT Helpdesk.
Monitoring and Conformance
Loganair reserves the right to access and monitor the use of GenAI applications on any company-issued device or on company-managed networks to ensure compliant use of these systems. It is essential you understand that failure to comply with any company policy may lead to disciplinary procedures and/or legal proceedings.