What’s wrong with information security in AI?
Businesses are increasingly using AI technologies and large language models (LLMs) for text analysis and generation. These include customer service chatbots, neural network-based applications, and even algorithms for testing cyberattacks. According to consulting firm Gartner, by 2026, AI will be used in the development of 70% of mobile versions of IT products. However, training algorithms with minimal human oversight requires large amounts of confidential corporate data. Meanwhile, cybercriminals are also arming themselves with new technologies and generating threats even faster. Conventional information security tools are ineffective in protecting AI systems due to their unique characteristics. As a result, these algorithms have become an attractive target for hackers.
Specialists in the field of AI have only started focusing on improving security methods in the last two years. Nearly all of the information in the nascent field is published in English and based on the realities of the English-speaking world; and there are currently no “textbooks” that could assist absolute beginners. Additionally, this discipline has yet to make it into universities. That’s why specialists in AI infosec are forced to rely on trial and error, developing security standards by themselves.
As a solution to this problem, ITMO University and the company Raft have launched LLM Security Lab – a project that is meant to create an expert community in AI security and train specialists in this field. The initiative is implemented at AI Talent Hub, the Master’s program in machine learning by ITMO and Napoleon IT.
The new lab’s tasks
Within the lab, students, supervised by experts, will be developing scientific and applied projects along the following trajectories:
- Content monitoring and analysis. Developing tools for tracking attacks and sentiment analysis inside user networks.
- Cyberattack imitation and personal data protection. Researching new types of attacks, developing solutions to prevent leaks of confidential information and ensure AI product security.
The lab’s members will also join the course LLM Security that covers relevant issues in AI ethics and security. Among the course’s topics are the development of monitoring tools and identification of attacks on LLM apps, testing and comparison of ethical characteristics of LLMs, stripping data from sensitive information before feeding it to algorithms, audit, and protection of AI business solutions.
“Students will be working with real-life commercial cases. We will be auditing apps, cleaning personal data, and analyzing data for major vendors. These cases will be provided by Raft from the company’s clients. This means that the practical tasks in our course will be based on the industry’s demands. This way, students will learn to solve real-world business tasks in AI security,” shares Evgeniy Kokuykin, the course’s developer, founder of the lab, and head of AI products at Raft.
In the course, students will also be able to prepare texts for publication in topical journals or presentation at IT conferences. For instance, Danil Kapustin, who completed the course before it was implemented by the new lab, presented his talk on prompt injections and low-resource attacks at GigaConf. Other participants of the course gave talks at the conferences Saint HighLoad++ and Offzone. This year, the course is slated to start in October.
Lab’s lecturers and experts
The lab is headed by Evgeniy Kokuykin, who managed development teams at Microsoft, Google, and Evernote, and now heads the AI development department at Raft. Among the lecturers are also Dmitry Botov, the head of ITMO’s educational program Artificial Intelligence; Anton Belousov, the head of development at Raft with experience of working at infosec startups in collaboration with Bosch, Mastercard, Ernst & Young, and Deutsche Bank; and Irina Nilokaeva, the head of R&D at Raft. The team is planning to invite guest lecturers from Russian IT leaders: Aytri, EGGHEADS, WiseAdvice, and Positive Technologies.
Who can join the lab?
Meetings with professors and experts take place online. Therefore, it is possible to listen to lectures, complete assignments, and do research from anywhere in the world. In addition, students from any Russian university can join the project – it is not necessary to be a Master’s student at ITMO.
Participants should meet the following requirements:
- an interest in infosec applied in AI, as well as in applied and scientific projects in infosec;
- understanding of the principles of LLMs and web app architecture;
- knowledge of Python at the level enough to create custom functions, classes, and scripts;
- experience in data preparation, LLM training, and working with technical literature in English is desirable.