Abstract: People with only a passing interest in technology may not have heard of IoT, but most of them have definitely heard of, or even own, smart home appliances like WeMo switches and lights, Philips Hue light bulbs, Echo and Alexa, or Google Assistant. For those who like to get their hands ‘dirty’ with a bit of Linux commands and Python code, there are the Raspberry Pi and Home Assistant, which open the world of connected things not only to devices created by industry giants and their partners, but to virtually anything that can talk WiFi or radio.
Learn about HomeAssistant, integrating MQTT sensors, and creating your own custom components (like a home-made WiFi RGB lamp you can set to any color to match your mood).
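To give a flavor of the MQTT integration mentioned above: Home Assistant can auto-discover an MQTT sensor when a retained configuration message is published under its discovery prefix. The sketch below only builds the discovery topic and payload; the `workshop/rgb_lamp/...` topic names and the `rgb_lamp_temperature` object id are made-up placeholders, not part of the workshop material.

```python
import json

# Home Assistant's MQTT discovery convention: publish a retained JSON config
# to homeassistant/<component>/<object_id>/config and the entity appears.
object_id = "rgb_lamp_temperature"   # placeholder id for this sketch
config_topic = f"homeassistant/sensor/{object_id}/config"

config_payload = {
    "name": "RGB Lamp Temperature",
    "state_topic": "workshop/rgb_lamp/temperature",  # placeholder topic
    "unit_of_measurement": "°C",
    "unique_id": object_id,
}

# With a broker available, you would publish it (e.g. with paho-mqtt):
#   client.publish(config_topic, json.dumps(config_payload), retain=True)
print(config_topic)
print(json.dumps(config_payload))
```

After the config message, the device only needs to publish plain values to the state topic for Home Assistant to track them.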
Abstract: We go through different approaches to chatbot development. The workshop consists of a few Jupyter notebooks with code samples. We show and compare several neural network architectures that can be used for chatbots, and based on our experience with the presented architectures we give advice on which architecture should be used where.
Requirements: basic Python knowledge
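As a baseline for the architectures the workshop compares, here is the simplest possible chatbot: a retrieval model that answers with the FAQ entry most similar to the question under bag-of-words cosine similarity (the neural seq2seq variants need training data and a DL framework). The FAQ pairs are illustrative placeholders.

```python
import math
from collections import Counter

# Toy FAQ knowledge base (placeholder content for the sketch).
FAQ = {
    "what time does the workshop start": "It starts at 9 AM.",
    "do i need python installed": "Yes, basic Python knowledge and a working install.",
    "where can i find the notebooks": "The Jupyter notebooks are linked in the agenda.",
}

def bow(text):
    """Bag-of-words vector as a token -> count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def reply(question):
    """Answer with the stored question most similar to the user's question."""
    q = bow(question)
    best = max(FAQ, key=lambda k: cosine(q, bow(k)))
    return FAQ[best]

print(reply("when does the workshop start"))
```

Retrieval bots like this are easy to control but cannot generalize beyond their stored answers, which is exactly the gap the generative architectures try to close.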
Abstract: We will talk about what AR means and guide you step by step through building your first Augmented Reality mobile app using Unity and Vuforia. We will also share a summary of what we learned at the Augmented World Expo, the biggest AR event in Europe this year.
Requirements: Please install the free edition of Unity on your laptop and create a Vuforia developer account.
Abstract: This workshop is a collection of mini-lessons in data mining. We will present the Do’s and Don’ts of the trade through a series of interactive exercises mimicking challenges encountered in your daily work. Topics include:
• What is success in data mining? Who does it best?
• The single absolute must-have in your data mining toolkit
• How to kick off a mining project – tips and pitfalls to consider
• Descriptive analysis: why slicing/dicing data cubes is the worst idea, like, ever.
• Multivariate analysis: how to add transparency to your black-box algorithm
• The 7 keys to a powerful visualization
• How to boost your product visibility through data marketing
Our goal is to help participants understand what they should start / stop doing to maximize the value derived from their data.
Abstract: Learn how world-leading enterprise software companies foster the new way of working. Does remote work really work? How do companies encourage proficiency and performance? Does it pay off for the experts, too? What about the motivation behind large teams of globally distributed experts? These questions and many more will be openly answered during what is expected to be a challenging and constructive discussion on the future of work.
Abstract: In the first part we will cover text pre-processing techniques, TF-IDF, n-grams and simple ML classifiers. In the second part of the workshop we will play with word embeddings, paragraph vectors and language models. The workshop is hands-on, practical and interactive, so that students leave with a good understanding of concepts and techniques that they can later apply in their own projects.
Requirements:
• Python virtualenv or Anaconda
• packages: scikit-learn, pandas
• GPU drivers (CUDA + cuDNN) if applicable
• TensorFlow, Keras, GloVe word vectors / gensim
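To illustrate the first part of the workshop, here is TF-IDF computed from scratch with the standard library; in the workshop itself you would use scikit-learn's TfidfVectorizer. The documents are made-up toy sentences.

```python
import math
from collections import Counter

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
]

def tf_idf(docs):
    """Return one {term: weight} mapping per document.

    tf  = raw term count in the document
    idf = log(N / df), where df is the number of documents containing the term
    """
    n = len(docs)
    tokenized = [doc.split() for doc in docs]
    df = Counter(t for tokens in tokenized for t in set(tokens))
    return [
        {t: count * math.log(n / df[t]) for t, count in Counter(tokens).items()}
        for tokens in tokenized
    ]

weights = tf_idf(docs)
# "the" occurs in 2 of 3 docs (idf = log 3/2), "cat" in only 1 (idf = log 3),
# so rare terms get the higher weights that make them useful classifier features.
```

The resulting per-document weight dictionaries are exactly the sparse vectors a simple ML classifier (e.g. logistic regression) would consume.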
Abstract: From initial scoping to deployment, Victoria will share both the visible and invisible tools and frameworks for navigating enterprise clients for a successful project outcome.
Abstract: Amazon SageMaker is a fully-managed platform that enables developers and data scientists to quickly and easily build, train, and deploy Machine Learning and Deep Learning models at any scale. In this session, we’ll introduce you to the service and we’ll run Python notebooks solving problems with both built-in algorithms (XGBoost, K-Means, etc.) and your own custom code (TensorFlow, Keras, etc.).
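For intuition about one of the built-in algorithms mentioned above, here is a toy one-dimensional K-Means written from scratch. This is not SageMaker's API or implementation (the managed version is distributed and runs at scale); the data points are made up for the sketch.

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm on 1-D points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to its cluster's mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious clusters around 1.0 and 10.0 (made-up data).
data = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
print(kmeans_1d(data, k=2))
```

In SageMaker the same alternation of assignment and update steps runs as a managed training job instead of a local loop.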
Abstract: This talk will look into the growing challenges faced when programming heterogeneous computers, and look forward to the solutions that developers can use in the future.
Abstract: Deep Learning (DL) has demonstrated undeniable successes in object detection, language translation, video games, etc. However, DL is simply classical machine learning and statistical methods powered by GPU acceleration. DL networks only compute what they have been programmed or trained to do, fail in ways that humans never would, and are not able to learn from their errors. The dangers posed by adversarial attacks create serious problems for self-driving vehicles or in cybersecurity, which can be partially mitigated by using Bayesian DL to model uncertainty. However, for the complex use cases we are confronted with today, we need to move towards Human-Level Artificial Intelligence (HLAI). HLAI is goal-oriented, improves continuously from errors, provides explainability, and learns how to learn from few examples, acquiring knowledge that generalizes to new situations. In this talk, I will present promising routes to overcome these challenges based on some advanced cognitive architectures.
Abstract: Investment firms within private equity and venture capital have traditionally relied on humans doing the legwork, knowing the right people, trusting their instincts and crunching data with spreadsheets. Daniel will tell the story of EQT and Motherbrain. With a startup mentality, a small team, the right tools, and data, EQT has been able to create a data-driven approach using analytics and machine learning to augment its investment teams.
Abstract: As technology enables us to be more interactive with our environment, we also need to be able to understand any form of interaction, most importantly speech and text. The talk consists of a walk-through of word embeddings, knowledge graphs and language models, and of how we can use them.
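The core idea behind the word embeddings covered in this talk is that meaning becomes geometry: similar words get nearby vectors. Below is a sketch with hand-made 3-dimensional vectors; real embeddings (e.g. word2vec or GloVe) have hundreds of dimensions and are learned from large corpora.

```python
import math

# Hand-made toy vectors, chosen only to illustrate the geometry.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Semantically close words end up with nearby vectors:
assert cosine(embeddings["king"], embeddings["queen"]) > cosine(embeddings["king"], embeddings["apple"])
```

The same similarity measure underlies most downstream uses, from nearest-neighbor lookups to features for language models.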
Abstract: Increasing pressure from regulatory authorities and the need for risk mitigation broaden the space for applications of artificial intelligence, even in the conservative environment of banks. Traditionally, banks have employed simple methods and spent enormous resources to run their customers through anti-money-laundering screenings. Using machine learning and natural language processing, we can improve screenings in both time efficiency and quality… helping the fight against evil villains. The talk presents how our ML models identify (not only) criminals, tells the story of our deep learning models, and shows the challenges that lie ahead of us.
Abstract: omni:us provides AI-powered document processing services for the insurance industry in order to fuel the digital transformation and make existing workflows more transparent, affordable and efficient. Handwritten forms are an important part of a large variety of insurance use cases such as claims. Extracting information from forms is a well-defined but challenging task due to multiple form versions, varying scan and photo quality, handwriting from unknown writers and a large number of entities per form. This talk will provide an overview of our processing chain, including page classification, template alignment and handwritten text recognition, and discuss our key findings from various real-world projects.
Abstract: Realistic music generation is a challenging task. When machine learning is used to build generative models of music, typically high-level representations such as scores, piano rolls or MIDI sequences are used that abstract away the idiosyncrasies of a particular performance. But these nuances are very important for our perception of musicality and realism, so we embark on modelling music in the raw audio domain. I will discuss some of the advantages and disadvantages of this approach, and the challenges it entails.
Abstract: We live in a time when information about most of our movements and actions is collected and stored in real time. The availability of large-scale mobile phone, credit card, browsing history and other data dramatically increases our capacity to understand, and potentially affect, the behavior of individuals and collectives.
The use of this data, however, raises legitimate privacy concerns. In this talk, I will discuss how traditional data protection mechanisms fail to protect people’s privacy in the age of big data. More specifically, I will show how the mere absence of obvious identifiers such as name or phone number, or the addition of noise, is not enough to prevent re-identification, and how sensitive information can often be inferred from seemingly innocuous data. I will discuss some of our recent work on privacy in networked environments, and the development of an attack on a commercial privacy-preserving software. I will then conclude by discussing some socially positive uses of big data and the solutions we are developing at Imperial College to allow large-scale behavioral data to be used while giving individuals strong privacy guarantees.
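The re-identification point above can be illustrated in a few lines: even after removing names, the combination of a handful of "innocuous" attributes (quasi-identifiers) is often unique per person. The records and attribute names below are made up for the sketch.

```python
from collections import Counter

# Made-up "anonymized" records: no names, just three innocuous attributes.
records = [
    {"zip": "10115", "birth_year": 1984, "gender": "F"},
    {"zip": "10115", "birth_year": 1984, "gender": "F"},
    {"zip": "10117", "birth_year": 1990, "gender": "M"},
    {"zip": "10119", "birth_year": 1975, "gender": "F"},
    {"zip": "10243", "birth_year": 1990, "gender": "M"},
]

def unicity(records, keys):
    """Fraction of records uniquely identified by the given quasi-identifiers."""
    combos = Counter(tuple(r[k] for k in keys) for r in records)
    return sum(combos[tuple(r[k] for k in keys)] == 1 for r in records) / len(records)

# 3 of these 5 "anonymous" records are unique on just three attributes.
print(unicity(records, ("zip", "birth_year", "gender")))
```

Anyone holding auxiliary data with the same three attributes (a loyalty card, a public register) can therefore re-identify the unique records, which is why removing identifiers alone is not anonymization.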
Abstract: Cybersecurity has undergone a radical change in recent years. It has morphed from a field in which rule-based applications are the norm to one that is dependent on processing enormous datasets. The way we relate to technology is also changing. We trust it with our most prized possessions: our personal and business information.
In a world dominated by software-only solutions, CyberSwarm has decided to challenge the traditional approach and develop a dedicated cybersecurity technology based on hardware. For efficient security, one needs to deny attackers the first move.
Abstract: The entertainment business around festivals has grown tremendously over the last few years. This big business of (typically) young DJs, management agencies and festival entrepreneurs is a serious ecosystem of events, merchandise, music streaming, sponsorships and royalties.
The competition for fans and their engagement, content consumption and spending is huge. In this session, Edwin will share how their data and intelligence technology provides differentiating value in the strategies of festival owners and artists.
Abstract: Almost all projects in artificial intelligence today are research projects, or prototyping. But almost all organizations fail to bring AI beyond that point, into real-world applications that actually change the world for the better. Mr Erlandsson will focus this session on the key steps that need to be in place to develop products and services powered by AI (deep learning and neural networks) at speed and scale, while simultaneously making sure they are maintainable and suitable for use in real life applications.