WORKSHOPS May 13, 2021
CONFERENCE May 14, 2021
Abstract: Let’s dive into the implementation of cryptocurrency tokens and NFTs! During this workshop, we’ll do a hands-on exploration of the most popular blockchain programming language, Solidity. We’ll write and run code on the mainnet of one of the supported networks, such as Ethereum, Polygon, or Avalanche. Our two use cases will be NFT deployment and programming new cryptocurrency tokens. Whatever languages you are proficient in, Solidity has an easy-to-understand syntax, so it will be a breeze to follow. Before that, we’ll do a quick walkthrough of generic blockchain technology concepts in order to understand how the infrastructure works before we run code on it. Afterwards, we’ll talk about automated testing and the design patterns best suited to programming on the blockchain. Solidity is an easy language to read, but the main challenge is to design your code with security in mind. You do not want to put millions of dollars of value at risk, so you have to be careful and adapt your development process to keep it under control. See you at the workshop!
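At its core, a fungible token is balance bookkeeping with guarded transfers. As a toy illustration of those mechanics (pure Python, not Solidity, and deliberately ignoring the real ERC-20 interface, msg.sender authorization, and events), a minimal ledger might look like:

```python
class ToyToken:
    """Toy model of a fungible token's balance bookkeeping.

    Illustrative only: real tokens run on-chain as Solidity
    contracts with sender-based authorization and event logs.
    """

    def __init__(self, total_supply, owner):
        # The deployer starts with the entire supply.
        self.balances = {owner: total_supply}

    def balance_of(self, account):
        return self.balances.get(account, 0)

    def transfer(self, sender, recipient, amount):
        # Mirror Solidity's require(): reject transfers that
        # exceed the sender's balance instead of going negative.
        if self.balance_of(sender) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balance_of(recipient) + amount


token = ToyToken(total_supply=1_000, owner="alice")
token.transfer("alice", "bob", 250)
```

The guard before the balance update is the security habit the workshop emphasizes: every state change is checked first, because on-chain code cannot be patched after funds are lost.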
Abstract: We’ll start with a real world challenge: matching journalists interested in writing a story with relevant domain experts. We’ll examine the use case, convert it into a technical one and discuss a baseline approach. We’ll then work on implementing a better and smarter solution, using a combination of Machine Learning algorithms. We’ll discuss performance, pros and cons, as well as future steps.
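A baseline for the matching problem described above could score a journalist’s query against expert bios with bag-of-words cosine similarity. A minimal pure-Python sketch (the expert names and texts are invented for illustration):

```python
import math
from collections import Counter


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def best_expert(query: str, experts: dict) -> str:
    """Return the expert whose bio best matches the query."""
    q = Counter(query.lower().split())
    return max(experts,
               key=lambda name: cosine(q, Counter(experts[name].lower().split())))


experts = {
    "dr_green": "climate change emissions carbon policy",
    "dr_byte": "machine learning neural networks computer vision",
}
match = best_expert("story about carbon emissions policy", experts)
```

A smarter solution, as the talk suggests, would replace raw word counts with learned embeddings so that "global warming" and "climate change" also match.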
Abstract: The workshop will present different approaches to voice cloning, considering constraints in terms of data volume, computing power, and available tools. We will guide the audience through the implementation details using both open-source and off-the-shelf tools. The advantages and tradeoffs of each method will be illustrated by means of experimental results.
Abstract: This workshop will cover some fundamental aspects of Data Science and Machine Learning:
– Exploratory Data Analysis
– Supervised Learning Methods (linear regression, logistic regression, decision/regression trees and random forests, k-nearest neighbors)
– Unsupervised Learning Methods (clustering: hierarchical, k-means, DBSCAN)
– Dimensionality-reduction Methods (Principal Component Analysis)
– Train/validation/test methods and regularization as ways of preventing overfitting
– Reinforcement Learning
Additional examples covering computer vision methods and computational models for materials science will be covered, time allowing.
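As a taste of the unsupervised methods on the list, k-means alternates between assigning points to their nearest centroid and moving each centroid to its cluster mean. A minimal 1-D sketch in pure Python (real work would use scikit-learn’s KMeans):

```python
def kmeans_1d(points, k, iters=20):
    """Tiny 1-D k-means: alternate assignment and centroid update."""
    centroids = sorted(points)[:k]  # naive init: first k sorted points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid.
            i = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[i].append(p)
        # Move each centroid to the mean of its cluster
        # (keep the old centroid if its cluster emptied).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)


# Two obvious clusters around 1.0 and 10.0.
centers = kmeans_1d([1.0, 1.2, 0.8, 10.0, 10.2, 9.8], k=2)
```

The naive initialization is exactly the kind of detail a workshop would probe: poor starting centroids can trap k-means in a bad local optimum, which is why libraries use smarter schemes like k-means++.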
Abstract: In this workshop, we’ll cover the basics of fuzz testing, a powerful automated software testing technique used by security professionals, researchers, and hackers to find severe bugs and vulnerabilities that have eluded manual testing in mission-critical software. We will learn about the most effective modern fuzzing techniques, and why security teams at companies like Google and Microsoft trust fuzzing to detect the severe vulnerabilities that developers miss. To date, fuzzing has found over 20 thousand bugs in Google Chrome and has been used to detect hundreds of bugs in Windows prior to release. We’ll go hands-on and learn how fuzzing could have caught Heartbleed, the bug that cost the internet half a billion dollars to fix. Finally, we’ll apply what we’ve learned to find a vulnerability in a web application, going through the four important steps of fuzzing: Attack Surface Analysis, Harness Building, Instrumentation, and Scaling. For more information on fuzz testing, you can read our high-level intro here: https://blog.fuzzbuzz.io/what-is-fuzz-testing/
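The core loop of fuzzing can be sketched in a few lines: a target function as the attack surface, a harness that feeds it randomly mutated inputs, and a crash check. This is a toy illustration with a hypothetical buggy parser; real fuzzers like AFL or libFuzzer add the coverage-guided instrumentation and scaling the abstract mentions:

```python
import random


def parse_header(data: bytes):
    """Hypothetical target: crashes on a specific malformed input."""
    if len(data) >= 2 and data[0] == 0xFF and data[1] == 0x00:
        raise RuntimeError("parser crash: unhandled malformed header")
    return "ok"


def mutate(seed: bytes) -> bytes:
    """Flip one random byte of the seed input."""
    data = bytearray(seed)
    data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)


def fuzz(seed: bytes, rounds: int = 50_000):
    """Harness: feed mutated inputs to the target, report any crasher."""
    for _ in range(rounds):
        candidate = mutate(seed)
        try:
            parse_header(candidate)
        except RuntimeError:
            return candidate  # found a crashing input
    return None


random.seed(0)  # deterministic for reproducibility
crasher = fuzz(b"\xff\x01\x02\x03")
```

Even this blind mutation strategy finds the planted bug quickly because the crashing input is one byte-flip away from the seed; coverage feedback is what lets real fuzzers reach bugs buried many mutations deep.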
Abstract: During the first hour, the audience will learn how to approach any business problem where data science is part of the solution, using a design thinking approach. They will gain knowledge of data science processes, a better understanding of why identifying the right business problem is critical, and hands-on experience in determining whether a business problem is a data science objective, following a step-by-step approach to transforming a business use case into data science objectives. During the second hour, the audience will learn the significance of data storytelling and of creating compelling reports. To connect your data to the influential, emotional side of the brain, package your insights as a data story: as neuroscientists have confirmed, decisions are frequently based on emotion, not logic. Learn how to create powerful delivery mechanisms for sharing insights and ideas in a memorable, persuasive, and engaging manner.
Abstract: Needless to say, the last 18 months have been a particularly challenging time. Our daily lives have been transformed across the world in a race to control the pandemic. But humans are problem solvers at heart, coming together to tackle this challenge, among others this decade will bring, while facing ever more pressing environmental issues. Let’s remind ourselves which challenges are shaping our society, environment, and industries over the next decade, and celebrate the inventiveness deployed to tackle them, through concrete examples of deep tech bringing technological frontiers and breakthroughs into our daily lives.
Abstract: Bias in ML systems is a serious risk. This talk gives you a good overview of what it is, how it originates, and finally some pointers on how to handle and minimise it.
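One simple way to surface the kind of bias this talk discusses is to compare a model’s positive-prediction rates across groups, a metric often called the demographic parity difference. A minimal sketch over made-up binary predictions:

```python
def selection_rates(predictions, groups):
    """Positive-prediction rate per group."""
    totals, positives = {}, {}
    for pred, g in zip(predictions, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + pred
    return {g: positives[g] / totals[g] for g in totals}


def parity_difference(predictions, groups):
    """Max gap in selection rates; 0.0 means demographic parity."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())


# Made-up binary predictions for two groups, A and B.
preds = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = parity_difference(preds, groups)
```

A large gap doesn’t prove the model is unfair by itself, but it is a cheap first signal that the training data or the model deserves a closer look.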
Abstract: Machine learning models do not last forever. Once they go live, they start to degrade. Sudden changes or data drift can also affect performance. Monitoring helps us keep an eye on our models and debug if something goes wrong. In this talk, Emeli will share best practices around ML model monitoring and show a demo using open-source tools. She will cover:
– What can go wrong in production
– How to keep an eye on models when you do not have an immediate feedback loop, and how to check for data and prediction drift
– How to generate regular reports on model performance, and what to look for.
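As a tiny illustration of the drift checks mentioned above, the two-sample Kolmogorov–Smirnov statistic (the maximum gap between two empirical CDFs) can flag a shift between reference and production feature values. A pure-Python sketch with made-up data and a hypothetical alert threshold:

```python
def ks_statistic(reference, current):
    """Max distance between the two samples' empirical CDFs."""
    ref, cur = sorted(reference), sorted(current)
    values = sorted(set(ref) | set(cur))

    def ecdf(sample, x):
        # Fraction of the sample at or below x.
        return sum(1 for v in sample if v <= x) / len(sample)

    return max(abs(ecdf(ref, x) - ecdf(cur, x)) for x in values)


reference = [0.1, 0.2, 0.3, 0.4, 0.5]    # training-time feature values
no_drift = [0.15, 0.25, 0.35, 0.45, 0.5]  # production, similar distribution
drifted = [5.1, 5.2, 5.3, 5.4, 5.5]       # production, clearly shifted

THRESHOLD = 0.5  # hypothetical; real setups use a proper significance test
alert = ks_statistic(reference, drifted) > THRESHOLD
calm = ks_statistic(reference, no_drift) > THRESHOLD
```

In practice one would compute a p-value rather than hard-code a threshold, and run the check per feature on a schedule, which is exactly the kind of regular report the talk covers.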
Abstract: Design doesn’t need to be hard! In this talk, we will cover how we are leveraging machine learning at uizard.io to make the world’s first ML-powered design tool. We will discuss why design is a challenging field to apply machine learning to, as well as the potential of machine learning as a democratization tool.
Abstract: In this talk we’ll explore how machine learning can be applied to satellite imagery for accurate and up-to-date insights about the state of the world’s forests and other vegetation. The first part of the talk will give a brief overview of remote sensing and recent advances in Deep Learning, with a real-world example of country-scale deforestation monitoring. The second part will build on this to describe the challenges, and show solutions, of running large-scale cloud-based computing infrastructure for high-resolution risk analysis in the domain of fire prevention. Finally, against the backdrop of the climate crisis, the talk hopes to inspire software engineers, data scientists and others to use our skills to take and accelerate climate action.
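Vegetation monitoring of the kind described above often starts from simple spectral indices computed per pixel, such as the NDVI, (NIR − Red) / (NIR + Red). A minimal sketch with made-up reflectance values:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel.

    Ranges from -1 to 1; dense healthy vegetation scores high,
    bare soil or recently cleared land scores near zero.
    """
    denom = nir + red
    return (nir - red) / denom if denom else 0.0


# Made-up near-infrared / red reflectances for two pixels.
forest_pixel = ndvi(nir=0.6, red=0.1)     # healthy vegetation
cleared_pixel = ndvi(nir=0.3, red=0.25)   # recently cleared land
```

Tracking how such an index drops over time for each pixel is the intuition behind deforestation alerts; deep learning models then add robustness to clouds, seasons, and sensor noise at country scale.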
Abstract: Videos are a rich source of multi-modal supervision. In this talk, we will describe techniques to learn representations by leveraging three modalities naturally present in videos: visual, audio and language streams. In the first part of the talk, we will introduce the notion of a multimodal versatile network (MMV) — a network that can ingest multiple modalities and whose representations enable downstream tasks in multiple modalities. In particular, we explore how best to combine the modalities, such that fine-grained representations of the visual and auditory modalities can be maintained, whilst also integrating text into a common embedding. We demonstrate how such networks trained on large collections of unlabelled video data can be applied to video, video-text, image and audio tasks. In the second part of the talk, we introduce BraVe, a self-supervised learning framework for video. In BraVe, one of the views has access to a narrow temporal window of the video, while the other view has broad access to the video content. Our models learn to generalise from the narrow view to the general content of the video. We demonstrate that MMV and BraVe achieve state-of-the-art results in self-supervised representation learning on standard video and audio classification benchmarks.
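Self-supervised objectives like those behind MMV and BraVe typically contrast paired views (e.g. a clip’s video and audio embeddings) against mismatched ones. A minimal pure-Python sketch of an InfoNCE-style loss over toy embeddings, not the papers’ actual implementation:

```python
import math


def dot(a, b):
    return sum(x * y for x, y in zip(a, b))


def info_nce(anchor, positive, negatives, temperature=0.1):
    """Cross-entropy of picking the positive among all candidates.

    Low loss means the anchor is most similar to its true pair.
    """
    candidates = [positive] + negatives
    logits = [dot(anchor, c) / temperature for c in candidates]
    m = max(logits)  # subtract max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))


anchor = [1.0, 0.0]                       # e.g. a video-view embedding
positive = [0.9, 0.1]                     # its paired audio/text embedding
negatives = [[0.0, 1.0], [-1.0, 0.0]]     # embeddings from other clips

good = info_nce(anchor, positive, negatives)            # aligned pair
bad = info_nce(anchor, negatives[0], [positive, negatives[1]])
```

Minimizing this loss pulls paired views together and pushes other clips apart, which is the mechanism that lets the narrow view in BraVe predict the broad content of the same video.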
Abstract: Reference architectures are tricky things; generally, technology architectures don’t tend to be one-size-fits-all. Different scenarios benefit from different approaches. And yet, we want our solutions to implement the (collective) learnings and best practices from our previous work. We want to avoid having everyone learn the same lessons over and over, and we want our solutions to be constructed in a similar manner so that developers can easily move between projects and quickly understand the basic architecture of the solution. Therefore, this reference architecture will target a specific scenario — the enterprise IoT use case, where mature organizations are pursuing a collection of applications built around the data collected from one or more sets of connected devices. We will, in fact, be designing a platform built for flexibility, extensibility, and reliability, with the ability to scale as needed.
Abstract: The talk presents the Mempathy videogame as a Safety and Alignment opportunity, along with the results and lessons learnt from implementing controlled language generation with Plug and Play Language Models (PPLM) for NPC design.
Abstract: Industry: Betting and gaming. Infrastructure: In the cloud. Company focus: Excellence in customer experience. Possibilities: Endless! We will walk you through possible implementation steps to prepare, create, and push an ML model to production. From gathering data to near real-time action within business units, the goal is to classify a customer as a VIP as early as possible to improve CX.
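A pipeline like the one described above usually starts from the simplest possible baseline before graduating to a real model. As an illustration, a nearest-centroid classifier over invented customer features (monthly deposit, sessions per week — both the features and the data are made up):

```python
def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]


def classify(x, vip_centroid, regular_centroid):
    """Label a customer by the nearer class centroid (Euclidean)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return ("VIP" if dist2(x, vip_centroid) < dist2(x, regular_centroid)
            else "regular")


# Invented training data: [monthly deposit, sessions per week]
vip_customers = [[900.0, 10.0], [1200.0, 8.0], [1000.0, 12.0]]
regular_customers = [[50.0, 2.0], [80.0, 3.0], [20.0, 1.0]]

vip_c = centroid(vip_customers)
reg_c = centroid(regular_customers)
label = classify([950.0, 9.0], vip_c, reg_c)
```

In a real deployment the interesting work is everything around this: feature freshness, class imbalance, and wiring the prediction into near real-time actions for the business units.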
Abstract: AI use cases from different industries that drive business growth, and why it is crucial to build a new set of skills to create trusted AI products.
Abstract: In every mature industry, quality assurance plays a key role in ensuring product adoption and preserving the company’s reputation. When it comes to the software industry, source code integrity is often overlooked in favor of new features, sophisticated functionality, and go-to-market speed. Source code quality assurance is often deprioritized, despite being one of the more critical factors in determining a product’s fate. For companies concerned about building a solid foundation for product development and risk elimination, a systematic assessment of source code quality is essential. CodeWeTrust’s c2m is a simple, complete, fully automated, AI-driven recommendation system that supports software maintenance, support cost reduction, web-security risk mitigation, and OSS license compliance. Leveraging advanced machine learning, c2m ensures solid decisions backed by data analysis and transparency.
Abstract: In 2009, the Japanese lunar orbiter SELENE discovered a hole in the Moon. As deep as a 38-story building, the giant pit exposes a cut through the ancient geological history of the Moon, and opens into a large underground space whose extent is still unknown. This presentation discusses a mission concept called “Moon Diver” whose aim is to use an extreme-terrain, rappelling rover called Axel to descend into the pit and enter the lunar subsurface for the first time in history.