* All times are EET

Conference Stage
08:30 - 09:00
09:00 - 11:00
Gabriel Stanciu, Teodor Niculescu, Eduard Cojocea (METRO.digital) – Introduction to Computer Vision. Convolutional Neural Networks for shelf volumetric estimation

Abstract: METRO.digital is the tech division of the METRO group, developing products and solutions for METRO Cash and Carry stores. This year, we started working on a Checkout-less Experience prototype, which you will have a chance to try at our booth.

In this workshop, however, we chose another business case to which the same principles can be applied in the limited time we have.

We will present a practical introduction to Computer Vision. First, we will briefly go through some basic Machine Learning and Computer Vision concepts, and then we will implement Convolutional Neural Networks to estimate how full the shelves in a store are. Shelf volumetric estimation is critical for optimal store shelf replenishment.
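For context on what the workshop will build: the core building block of a Convolutional Neural Network is the 2D convolution, which slides a small kernel over an image and sums elementwise products. A minimal pure-Python sketch (illustrative only; `conv2d` and the sample patch are hypothetical examples, not workshop code, and real models use optimized libraries such as PyTorch or TensorFlow):

```python
def conv2d(image, kernel):
    """Valid (no padding) 2D convolution of a 2D list by a 2D kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Elementwise product of the kernel with the patch under it
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A 3x3 vertical-edge kernel applied to a 4x4 patch with a left-to-right edge
patch = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
vertical_edge = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
print(conv2d(patch, vertical_edge))  # → [[3.0, 3.0], [3.0, 3.0]]
```

A CNN stacks many such learned kernels, interleaved with nonlinearities and pooling, before a final regression or classification head.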

Prerequisites: Please bring your laptop and make sure you have a Google account with Google Colab access. This is the safest way to ensure everything works.

If you don’t have a Google account with Google Colab access, try the following local configuration:

python 3.7.15

11:00 - 11:15
Coffee Break
11:15 - 13:15
Alexandru Baila, Eugeniu Spinu (Endava) – Latest Issue from the Transformers Trilogy: GPT-3

Abstract: What is GPT-3 and why is everyone talking about it? Let’s explore some of its functionalities during the workshop and have fun.

Prerequisites: Bring your laptop and make sure you have Python 3.9.0 installed, with the corresponding pip version, so that you can set up your local environment with the following packages:
• openai
• pandas == 1.5.1
• notebook
• termcolor == 2.1.0
• colorama == 0.4.6
• transformers == 4.23.1
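One convenient way to install the list above is a pip requirements file (a sketch based on the pinned versions above; the unpinned `openai` will resolve to the latest release):

```text
openai
pandas==1.5.1
notebook
termcolor==2.1.0
colorama==0.4.6
transformers==4.23.1
```

Save it as `requirements.txt` and run `pip install -r requirements.txt` inside your environment.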

13:15 - 14:00
14:00 - 16:00
Adela Muresan (BT Code) – Object Detection in Real Time Videos

Abstract: We will go together through the steps of building a custom object detector with YOLO, from data labeling to model generation. Once such a model is built, it can be tested on existing videos, as well as on a real-time camera stream. To exemplify the concept, we will build a model that detects the presence of a card in a video.
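A detector's predicted boxes are usually judged against labeled boxes by intersection-over-union (IoU), the metric YOLO-style training and evaluation rely on. A minimal sketch (an illustrative helper of our own, not the workshop's code; boxes are `(x1, y1, x2, y2)` corner coordinates):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    # Corners of the overlap rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes → 1.0
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```

A prediction is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.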

Prerequisites: Basic Python knowledge, as the code is Python-based.

16:00 - 16:15
Coffee Break
16:15 - 18:15
Ioan Moldovan (Weavechain) – Web3 Data Sharing with Weavechain

Abstract: What is Web3 data and how can it change the current data flows? During the workshop we will do a data sharing exercise, diving into data lineage, monetization and confidential computing.

Prerequisites: Docker and a browser with the MetaMask plugin.

08:00 - 08:50
08:50 - 09:00
Opening Remarks
09:00 - 09:30
Cyrus Moazami-Vahid (Amazon Web Services) – AI Trends

Abstract: In this talk we’ll explore some of the present trends in AI, including Graph Neural Networks, AI Explainability, and AutoML. We’ll look at where the market is, which use cases these trends help solve, and what Amazon Web Services has contributed.

09:30 - 10:00
Julien Simon (Hugging Face) – Hyperproductive Machine Learning with Transformers and Hugging Face

Abstract: According to the latest State of AI report, “transformers have emerged as a general-purpose architecture for ML. Not just for Natural Language Processing, but also Speech, Computer Vision or even protein structure prediction.” Indeed, the Transformer architecture has proven very efficient on a wide variety of Machine Learning tasks. But how can we keep up with the frantic pace of innovation? Do we really need expert skills to leverage these state-of-the-art models? Or is there a shorter path to creating business value in less time? In this code-level talk, we’ll show you how to quickly build and deploy machine learning applications based on Transformers models. Along the way, you’ll learn about the portfolio of open source and commercial Hugging Face solutions, and how they can help you deliver high-quality machine learning solutions faster than ever before.

10:00 - 10:30
Ekaterina Sirazitdinova (NVIDIA) – The Future of Robots and Perception Systems with Synthetic Data

Abstract: Developing perception systems and robots capable of sensing complex environments in a scalable way is a challenging task. Relying on AI, such applications become more intelligent, flexible, and robust, but AI training requires lots of data. Data collection and annotation are time consuming and expensive. Furthermore, in order to create models able to generalize well, data needs to be diverse and balanced. With advancement in simulation tools and generative models, more and more AI practitioners in computer vision start utilizing synthetic data as a possible alternative to real data. In this session, we will discuss the benefits and challenges of synthetic data and will share a typical workflow of synthetic data creation.

10:30 - 11:00
Bruno Kovacic (Superbet & Happening) – Risk in Sportsbooks, a Marriage Between Humans and Machines

Abstract: The explosive growth of accessible data and the increase in betting complexity and volume inevitably lead to a sharp increase in risk-processing capacity requirements. Hiring more risk analysts and traders to sweep over ever-growing data sources to identify suspicious behaviour does not scale well, and it shifts their focus away from where they are best utilized: designing and pricing unique markets. Machine learning, by contrast, is perfectly suited to scan every ticket for suspicious behaviour as it happens, while welcoming the use of large and multiple datasets, thereby alleviating the burden on humans. Moreover, as it learns and grows, the AI may be capable of generalising and uncovering previously unknown patterns. In this presentation, we describe the tight collaboration between data scientists and traders that we are building at Superbet, which is essential both to building machine learning risk models and to empowering them with expert domain knowledge.

11:00 - 11:20
Coffee Break
11:20 - 11:50
Alessandro Festa (SmartCow) – Building Tomorrow’s World with Digital Twins & Edge AI

Abstract: Digital twins and AI at the edge are no longer the bleeding-edge technology they were in the recent past. Multiple use cases have been realized in real life, and more advanced applications are on the way to deployment. In this session, SmartCow will discuss the composition of digital twins and edge AI devices, leveraging leading data generation technology to enlarge edge IoT devices’ capabilities and build smarter next-generation cities.

11:50 - 12:20
Fouzia Adjailia (University of Kosice) – Women in Artificial Intelligence: What’s It Like to Be a Woman Who Builds Machines?

Abstract: There are few women in artificial intelligence. In fact, according to Fortune, only 10% of all ground-floor artificial intelligence (AI) jobs are filled by women. This is startling when you consider that AI is the fastest-growing field in the world and could have a huge impact on the direction our society takes. What’s going on here? Is this just another typical issue women face when venturing into a field that is predominantly populated by men? Or is there more to the story?

What is it like to be a woman in A.I.? The field of artificial intelligence has been plagued by a lack of diversity: only 14% of attendees at the 2016 A.I. summit were women, for example. But the problem is not that women aren’t entering the field; we just aren’t staying. Of the women who started graduate programs in computer science or engineering from 2004 to 2013, 60% had left their programs or made plans to leave within six years, according to research from Google and Carnegie Mellon University. For many women, this career path just doesn’t seem inviting. In a survey commissioned by Google, 48% of respondents said they had experienced an unwanted sexual advance, and 25% had received an explicit or implicit offer of sex for career advancement. When asked why they believe women leave A.I., 57% of respondents said “brogrammer culture.” These numbers are especially disappointing because artificial intelligence is so critical to our future: AI will be responsible for anywhere from $3 billion to $5 trillion in annual economic growth by 2025, according to McKinsey Global Institute estimates. And it’s not just about creating new business opportunities; AI can help solve some of the world’s biggest problems, including climate change and disease control.

12:20 - 12:50
Moe Sani (Dyson) – Rapid Prototyping of Future Connected Products

Abstract: Connectivity is increasingly becoming a vital part of new devices, but the prototyping and development process for these products might involve some challenges.

Efficient development of connected products requires multidisciplinary teams of App, Cloud, Embedded, UI/UX and other engineers to closely collaborate on the same project while avoiding getting out of sync.

In this talk, I am going to present our success story during the development of Dyson’s first connected wearable product that was launched recently. I will share some of the challenges we were facing and some insights on how we facilitated the remote and connected collaboration of different global teams on the same project.

12:50 - 14:00
14:00 - 14:30
Soroush Seifi (Automotive Industry) – Visual Attention in Partially Observable Environments

Abstract: The deployment of common computer vision architectures is constrained by the input being fully observable and taken by a human photographer pointing the camera at meaningful subjects. In a large environment, an autonomous agent with a limited field of view/resource cannot fully observe the scene and its camera movements might not be consistent with those of a human photographer. Active visual exploration aims to assist such an agent to understand an environment by choosing the best viewing directions of the camera. As such, it observes the world as a sequence of narrow-field-of-view glimpses, reasons about the unseen parts and decides where to look next given the visual cues in its observations. In this talk we define the constraints of this problem in modern computer vision and propose solutions for training an agent to look at the most meaningful parts of a scene.

14:30 - 15:00
Tom Mason (Stability AI) – Generative AI Foundations
15:00 - 15:30
Sandra Kublik (Cohere) – Large Language Models Are Eating The World

Abstract: Large Language Models (LLMs), arguably the most important NLP advancement in recent years, are now taking the world of software by storm. In this talk I will walk the participants through the key LLM concepts and the use cases where LLMs show the biggest promise, and discuss some of their existing real-world applications.

15:30 - 16:00
Mihai Raneti (CyberSwarm) – Neuromorphic computing – The future of A.I.

Abstract: Neuromorphic computing is a new type of non-von Neumann architecture which mimics the biological brain. Overcoming the von Neumann bottleneck, neuromorphic computing models the way the brain works, and it is based on a novel electronic component called the memristor, a resistor with memory. While von Neumann systems are largely serial, brains use massively parallel computing, and they are more fault-tolerant than current computers. Neuromorphic computing will fundamentally change how software and hardware are developed, because of the integration between different elements in neuromorphic hardware. At CyberSwarm we are using this technology to pave the way towards the next computing paradigm.

16:00 - 16:20
Coffee Break
16:20 - 16:50
Lucian Gheorghe (Nissan Motor Corporation) – Nissan Formula E – Brain To Performance – from Applied Neuroscience to an Electrifying Driving Experience

Abstract: A first in the world of motorsports, Nissan Brain to Performance uses advanced brain imaging and analysis to determine the anatomical specifics of high performance, professional drivers. The program aims to develop bespoke, optimized training to enhance the brain functions and anatomy related to driving and racing.

We studied how Formula E drivers’ brains differ from those of ‘average drivers’. Next, we developed bespoke training protocols leveraging advancements in portable brain stimulation technologies. We will present the latest results of our training protocols and how we think we can further apply this technology to provide an Electrifying Driving Experience to all Nissan drivers.

16:50 - 17:20
Nadiya Shvai (Cyclope.ai) – Artificial Intelligence Based Solution for Video Incident Detection

Abstract: Real-time incident detection and management is a very important task for road safety, especially within tunnels, due to high risk factors and rapid response requirements. Nowadays, Video-based Automatic Traffic Incident Detection has become an integral part of Intelligent Transportation Systems (ITS). However, the development of highly accurate systems faces many challenges due to the environment, which includes complex lighting and environmental conditions, camera positions, image quality, network bandwidth, tunnel road layouts and structures, etc. We present a new comprehensive and powerful system for detecting road incidents in tunnels. The proposed system leverages different Deep Neural Network (DNN) based methods to perform object detection, semantic segmentation and classification tasks. Moreover, it integrates speed measurement and vehicle tracking for temporal and motion analysis. It is based on two distinct layers: (a) scene understanding and (b) incident detection. The high efficiency and effectiveness of the proposed system are validated by its current industrial use.

17:20 - 17:50
Cameron Cooke (Airbus) – Into the Metaverse – How Synthetic Data is Enabling the Future of Computer Vision

Abstract: At AIRBUS, we are constantly challenging ourselves to improve our manufacturing processes. Regardless of increasing production rates, ensuring quality or enabling new materials & processes, there are always new opportunities to support our operators with technology. Computer Vision allows us to bridge the gap between the digital world of what is designed and the analog world of what is manufactured on the shop floor.

However, with these opportunities come challenges. To deliver the full value of visual inspection, we need to be able to provide solutions that are both scalable and adaptable. Learn how we use graphical rendering and machine learning together to solve real problems on the shop floor.

17:50 - 18:00
Codiax Closing Ceremony