
How AI Happens is a podcast featuring experts and practitioners explaining their work at the cutting edge of Artificial Intelligence. Tune in to hear AI Researchers, Data Scientists, ML Engineers, and the leaders of today’s most exciting AI companies explain the newest and most challenging facets of their field. Powered by Sama.
Mar 16, 2023
Kristen is also the founder of Data Moves Me, a company that offers courses, live training, and career development. She hosts The Cool Data Projects Show, where she interviews AI, machine learning (ML), and deep learning (DL) experts about their projects.

Key Points From This Episode:
Kristen’s background in the data science world and what led her to her role at Comet.
What it means to be a developer advocate and build community.
Some of the coolest AI, ML, and DL ideas from The Cool Data Projects Show!
One of the computer vision projects Kristen is working on that uses Kaggle datasets.
How Roboflow can help you deploy a computer vision model in an afternoon.
The amount of data that is actually needed for object detection.
Solving the challenge of contextualization for computer vision models.
A look at attention mechanisms in explainable AI and how to tackle large datasets.
Insight into the motivations behind Kristen’s school bus project.
The value of learning through building and solving “real” problems.
How Kristen’s background as a data scientist lends itself to computer vision.
Free and easily-available resources that others in the space have created to assist you.
Advice for those forging their own careers: get involved in the community!

Tweetables:
“I’m finding people who are working on really cool things and focusing on the methodology and approach. I want to know: how did you collect your data? What algorithm are you using? What algorithms did you consider? What were the challenges that you faced?” — @DataMovesHer [0:05:55]
“A lot of times, it comes back to [the fact that] more data is always better!” — @DataMovesHer [0:15:40]
“I like [to do computer vision] projects that allow me to solve a problem that is actually going on in my life. When I do one, suddenly, it becomes a lot easier to see other ways that I can make other parts of my life easier.” — @DataMovesHer [0:18:59]
“The best thing you can do is to get involved in the community. It doesn’t matter whether that community is on Reddit, Slack, or LinkedIn.” — @DataMovesHer [0:23:32]

Links Mentioned in Today’s Episode:
Data Moves Me
Comet
The Cool Data Projects Show
Mothers of Data Science
Kristen Kehrer on LinkedIn
Kristen Kehrer on Twitter
Kristen Kehrer on Instagram
Kristen Kehrer on YouTube
Kristen Kehrer on TikTok
Kaggle
Roboflow
Kangas Library
How AI Happens
Sama
00:26:09
Mar 01, 2023
In this episode, we learn the benefits of blue-collar AI education and the role of company culture in employee empowerment. Dr. Borne shares the history of data collection and analysis in astronomy and the evolution of cookies on the internet, and explains the concept of Web3 and the future of data ownership. Dr. Borne is of the opinion that AI serves to amplify and assist people in their jobs rather than replace them, and in our conversation, we discover how everyone can benefit if adequately informed.

Key Points From This Episode:
Data scientist and astrophysicist Dr. Kirk Borne’s vast background.
The history of data collection and analysis in astronomy.
How Dr. Borne fulfills his passion for educating others.
DataPrime’s blue-collar AI education course.
How AI amplifies your work without replacing it.
The difference between efficiency and effectiveness.
The difference between educating blue-collar students and graduate students.
The goal of the blue-collar AI course.
The ways in which automation and digital transformation are changing jobs.
Comparison between the AI revolution (the fourth industrial revolution) and previous industrial revolutions.
The role of company culture in employee empowerment.
Dr. Borne’s approach to teaching AI education.
Dr. Borne shares a humorous Richard Feynman anecdote.
The concept of Web3 and the future of data ownership.
The history and evolution of cookies on the internet.
The ethical concerns of AI.

Tweetables:
“[AI] amplifies and assists you in your work. It helps automate certain aspects of your work but it’s not really taking your work away. It’s just making it more efficient, or more effective.” — @KirkDBorne [0:11:18]
“There’s a difference between efficiency and effectiveness … Efficiency is the speed at which you get something done and effective means the amount that you can get done.” — @KirkDBorne [0:11:29]
“There are different ways that automation and digital transformation are changing a lot of jobs. Not just the high-end professional jobs, so to speak, but the blue-collar gentlemen.” — @KirkDBorne [0:18:06]
“What we’re trying to achieve with this blue-collar AI is for people to feel confident with it and to see where it can bring benefits to their business.” — @KirkDBorne [0:24:08]
“I have yet to see an auto-complete come over your phone and take over the world.” — @KirkDBorne [0:26:56]

Links Mentioned in Today’s Episode:
Kirk Borne, Ph.D.
Kirk Borne, Ph.D. on LinkedIn
Kirk Borne, Ph.D. on Twitter
Richard Feynman
JennyCo
Alchemy Exchange
Booz Allen Hamilton
DataPrime
How AI Happens
Sama
00:35:49
Feb 23, 2023
Goodbye Passwords, Hello Biometrics with George Williams
Episode 61: Show Notes.
Is it really safer to have a system know your biometrics rather than your password? If so, who do you trust with this data? George Williams, a Silicon Valley tech veteran who most recently served as Head of AI at SmileIdentity, is passionate about machine learning, mathematics, and data science. In this episode, George shares his opinions on the dawn of AI, how long he believes AI has been around, and references the ancient Greeks to show the relationship between the current fifth big wave of AI and the genesis of it all. Focusing on the work done by SmileIdentity, you will understand the growth of AI in Africa, what biometrics is and how it works, and the mathematical vulnerabilities in machine learning. Biometrics is substantially more complex than password authentication, and George explains why he believes this is the way of the future.

Key Points From This Episode:
George’s opinions on the genesis of AI.
The link between robotics and AI.
The technology and ideas of the Ancient Greeks, in the time of Aristotle.
George’s career past: software engineer versus mathematics.
What George’s role is within SmileIdentity.
How Africa is skipping passwords and going into advanced biometrics.
How George uses biometrics in his everyday life.
Quantum supremacy: how it works and its implications.
George’s opinions on conspiracy theories about the government having personal information.
Why understanding the laws and regulations of technology is important.
The challenges of data security and privacy.
Some ethical, unbiased questions about biometrics, mass surveillance, and AI.
George explains ‘garbage in, garbage out’ and how it relates to machine learning.
How SmileIdentity is ensuring ethnic diversity and accuracy.
How to measure an unbiased algorithm.
Why machine learning is a life cycle.
The fraud detection technology in SmileIdentity’s biometric security.
The shift of focus in machine learning and cyber security.

Tweetables:
“Robotics and artificial intelligence are very much intertwined.” — @georgewilliams [0:02:14]
“In my daily routine, I leverage biometrics as much as possible and I prefer this over passwords when I can do so.” — @georgewilliams [0:08:13]
“All of your data is already out there in one form or another.” — @georgewilliams [0:10:38]
“We don’t all need to be software developers or ML engineers, but we all have to understand the technology that is powering [the world] and we have to ask the right questions.” — @georgewilliams [0:11:53]
“[Some of the biometric] technology is imperfect in ways that make me uncomfortable and this technology is being deployed at massive scale in parts of the world and that should be a concern for all of us.” — @georgewilliams [0:20:33]
“In machine learning, once you train a model and deploy it you are not done. That is the start of the life cycle of activity that you have to maintain and sustain in order to have really good AI biometrics.” — @georgewilliams [0:22:06]

Links Mentioned in Today’s Episode:
George Williams on Twitter
George Williams on LinkedIn
SmileIdentity
NYU Movement Lab
ChatGPT
How AI Happens
Sama
00:33:25
Dec 15, 2022
Our discussion today dives into the climate change-related applications of AI and machine learning, and how organizations are working towards mobilizing them to address the climate problem. Priya shares her thoughts on advanced technology and creating a dystopian version of humanity, what made her decide on her Ph.D. topic, and what she learned touring the world and interviewing power grid experts.

Key Points From This Episode:
Priya shares her take on ChatGPT.
We talk about ChatGPT guardrails and whether they should be implemented manually or with built-in technology that automatically detects issues.
Concerns with the concept of advanced technology and creating a dystopian version of humanity.
What made Priya want to get into her particular Ph.D. topic.
What surprised her about her tour around the world interviewing people.
Priya explains what she means by a 'systems problem.'
Machine learning and AI in power grids; what is the thrift of opportunity?
Priya speaks to the reason why she founded a climate change AI organization.
Narrowing the focus, as an organization, in AI and climate change.
Priya shares an example of what work looks like for someone in a role with machine learning and climate change.
Recent wins in the climate change world and how they measure the success of their progress.
The gap between the vision of where she is now and where she wants to be in the medium term.

Tweetables:
“When we are working on climate change related problems, even ones that are “technical problems” every problem is basically a socio-political technical problem, and really understanding that context when we move that forward can be really important.” — @priyald17 [0:10:02]
“Machine learning in power grids and really in a lot of other climate relevance sectors can contribute along several themes or in several ways.” — @priyald17 [0:12:18]
“What prompted us to found this organization, Climate Change AI, [is] to really help mobilize the AI machine learning community towards climate action by bringing them together with climate researchers, entrepreneurs, industry, policy, all of these players who are working to address the climate problems and sort of to do that together.” — @priyald17 [0:17:21]
“So the whole idea of Climate Change AI is rather than just focusing on what can we as individuals who are already in this area do to do research projects or deployment projects in this area, how can we sort of mobilize the broader talent pool and really help them to connect with entities that are really wanting to use their skills for climate action.” — @priyald17 [0:19:17]

Links Mentioned in Today’s Episode:
Priya Donti
Priya Donti on Twitter
Putting the Smarts in the Smart Grid
Climate Change AI
Climate Change AI Interactive Summaries
How AI Happens
Sama
00:30:20
Dec 08, 2022
Genetec has been a software provider for the physical security industry for over 25 years, earning its spot as the world’s number one software provider in video management. We are pleased to be joined today by Florian Matusek, Genetec’s Director of Video Analytics and the host of Video Analytics 101 on YouTube. Florian explains how his company is driving innovation in the market and what his specific role is before diving into the importance of maintaining both security and privacy, the new wave of spatial analytics, and why real-time improvements are more difficult than back-end adjustments. Our guest then lists all the exciting things he is witnessing in the world of video analytics and what he hopes to see in re-identification and gait analysis in the future. We discuss synthetic data and whether it will ever be commoditized, and close with an exploration of the probable future of grocery stores without any employees.

Key Points From This Episode:
A warm welcome to the Director of Video Analytics at Genetec, Florian Matusek.
How the Video Analytics 101 YouTube channel was formed.
The purpose of his YouTube channel and its ideal viewer.
What his company does and what his role entails.
How Genetec has transformed as a company from its inception until now.
The insights Florian hopes to provide to his customers through video analytics.
Genetec’s new technology that upholds both security and privacy.
Exploring the new wave of spatial analytics.
The difference between real-time improvements and gradual, back-end adjustments.
New use cases, techniques, and trends that Florian finds exciting.
The perks and problems of re-identification.
Whether the current technology of gait analysis is reliable and how it relates to re-identification.
How technology is evolving to include time-based data collection.
The difficulties he experiences in collecting video data to train his models.
Whether there’s an opportunity for synthetic data to augment his data strategy.
Florian’s thoughts on synthetic data becoming commoditized.
Some interesting ways that Genetec’s clients are using its technology.
The video analytics behind the automated drinks system at the Denver Broncos stadium.
How close we are to a future of grocery stores with no employees or cash registers.

Tweetables:
“Nowadays, it's about automation. It's about operational efficiency. It's about integrating video and access control, and license plate recognition, IoT sensors, all into one platform, and providing the user a single pane of glass.” — Florian Matusek [0:05:11]
“We will always build products that benefit our users, which is the security operators, the ones purchasing it. But at the same time, we see it as our responsibility to also do everything possible to protect the privacy of the citizens that our customers are recording.” — Florian Matusek [0:09:03]
“What gets me excited are solutions that are really targeted for a specific purpose and made perfect for this purpose.” — Florian Matusek [0:11:24]
“You need both synthetic data and real data in order to make the real applications work really well.” — Florian Matusek [0:21:42]
“It's really funny how customers come up with creative ways to solve their specific problems.” — Florian Matusek [0:26:36]

Links Mentioned in Today’s Episode:
Florian Matusek on LinkedIn
Video Analytics 101 on YouTube
Genetec
Genetec on YouTube
How AI Happens
Sama
00:30:59
Nov 18, 2022
Navrina shares why trust and transparency are crucial in the AI space and why she believes having a Chief Ethics Officer should become an industry standard. Our conversation ends with a discussion about compliance and what AI tech organizations can do to ensure reliable, trustworthy, and transparent products. To get 30 minutes of uninterrupted knowledge from The National AI Advisory Committee member, Mozilla board of directors member, and World Economic Forum young global leader Navrina Singh, tune in now!

Key Points From This Episode:
Welcoming today’s guest, CEO and Founder of Credo AI, Navrina Singh.
A look at Navrina’s recent background and why she decided to start Credo AI.
Why it’s important to take responsibility for the technology you create.
The reasons why the AI technology industry chose to create its own systems of oversight.
Why trust is a crucial part of the AI technology sector.
How Credo AI helps companies engage with issues of transparency and trust.
The people in charge of AI governance at the various companies Credo AI deals with.
Who Navrina thinks should be responsible for AI governance at every company.
Where Credo’s clients usually fall short when it comes to compliance.
What AI technology companies should be thinking about beyond compliance.
Navrina’s view on what organizations can do to ensure reliable, trustworthy, and transparent tech.

Tweetables:
“I always saw technology as the tool that would help me change the world. Especially growing up in an environment where women don’t have the luxury that some other people have, you tend to lean on things that can make your ideas happen, and technology was that for me.” —@navrinasingh [0:01:17]
“As technologists, it’s our responsibility to make sure that the technologies we are putting out in the world that are becoming the fabric of our society, we take responsibility for it.” —@navrinasingh [0:04:04]
“By its very nature, trust is all about saying something and then consistently delivering on what you said. That’s how you build trust.” —@navrinasingh [0:08:58]
“I founded Credo AI for a reason, to bring more honest accountability in artificial intelligence.” —@navrinasingh [0:10:45]
“We are going to see more trust officers and trust functions emerge within organizations, but I am not really sure if a chief ethics officer is going to emerge as a core persona, at least not in the next two to three years. Is it needed? Absolutely, it’s needed.” —@navrinasingh [0:17:32]

Links Mentioned in Today’s Episode:
Navrina Singh on Twitter
Navrina Singh on LinkedIn
Credo AI
The National AI Advisory Committee
World Economic Forum
Dr. Fei-Fei Li on LinkedIn
How AI Happens
Sama
00:29:25
Nov 10, 2022
Arize and its founding engineer, Tsion Behailu, are leaders in the machine learning observability space. After spending a few years working as a computer scientist at Google, Tsion was drawn by her curiosity to the startup world where, since the beginning of the pandemic, she has been building cutting-edge technology. Rather than doing it all manually (as many companies still do to this day), Arize AI’s technology helps machine learning teams detect issues, understand why they happen, and improve overall model performance. During this episode, Tsion explains why this method is so advantageous, what she loves about working in the machine learning field, the issue of bias in machine learning models (and what Arize AI is doing to help mitigate that), and more!

Key Points From This Episode:
Tsion’s career transition from computer science (CS) into the machine learning (ML) space.
What motivated Tsion to move from Google to the startup world.
The mission of Arize AI.
Tsion explains what ML observability is.
Examples of the Arize AI tools and the problems that they solve for customers.
What the troubleshooting process looks like in the absence of Arize AI.
The problem with in-house solutions.
Exploring the issue of bias in ML models.
How Arize AI’s bias tracing tool works.
Tsion’s thoughts on what is most responsible for bias in ML models and how to combat these problems.

Tweetables:
“We focus on machine learning observability. We're helping ML teams detect issues, troubleshoot why they happen, and just improve overall model performance.” — Tsion Behailu [0:06:26]
“Models can be biased, just because they're built on biased data. Even data scientists, ML engineers who build these models have no standardized ways to know if they're perpetuating bias. So more and more of our decisions get automated, and we let software make them. We really do allow software to perpetuate real world bias issues.” — Tsion Behailu [0:12:36]
“The bias tracing tool that we have is to help data scientists and machine learning teams just monitor and take action on model fairness metrics.” — Tsion Behailu [0:13:55]

Links Mentioned in Today’s Episode:
Tsion Behailu
Arize Bias Tracing Tool
Arize AI
How to Know When It's Time to Leave your Big Tech SWE Job -- Tsion Behailu
How AI Happens
Sama
00:25:09
Oct 28, 2022
Ian discusses what unique problems aerial automated vehicles face, how segregations in the air affect flying, how the vehicles land, and how they know where to land. Animal Dynamics' goal is to phase out humans in their technology entirely, and Ian explains the human involvement in the process before telling us where he sees this technology fitting in with disaster response in the future.

Key Points From This Episode:
An introduction to today’s guest, Ian Foster.
A brief overview of Ian’s background and how he ended up at Animal Dynamics.
Ian shares the mission of Animal Dynamics and how that’s being carried out.
What the delivery mechanism is and what the technology is delivering.
Why air is best for this kind of delivery and why it’s best not to use pilots.
The challenges in an aerial automated vehicle.
How segregations in the air affect this technology and how they’re combatting these issues.
Ian tells us which is more difficult: to park a car autonomously or land a plane autonomously.
How their vehicles land themselves.
How they are training the technology to notice safe landing zones.
How humans come into this AI technology and why they’re being phased out slowly.
What Ian thinks the future and long-term opportunities are for Animal Dynamics’ technology.

Tweetables:
“Drawing inspiration from the natural world to help address problems is very much the ethos of what Animal Dynamics is all about.” — Ian Foster [0:02:06]
“Data for autonomous aircraft is definitely a big challenge, as you might imagine.” — Ian Foster [0:16:17]
“We're not aiming to just jump straight to full autonomy from day one. We operate safely within a controlled environment. As we prove out more aspects of the system performance, we can grow that envelope and then prove out the next level.” — Ian Foster [0:19:01]
“Ultimately, the desire is that the systems basically look after themselves and that humans are only involved in telling the thing where to go, and then the rest is delivered autonomously.” — Ian Foster [0:23:45]
“The important thing for us is to get out there and start making a difference to people. So we need to find a pragmatic and safe way of doing that.” — Ian Foster [0:23:57]

Links Mentioned in Today’s Episode:
Ian Foster on LinkedIn
Animal Dynamics
How AI Happens
Sama
00:28:31
Oct 20, 2022
Curren is a curious, driven, and creative leader with vast experience in data science and AI. Her original background was in neuroscience and cognitive neuroscience, but she entered the industry when she realized how much she enjoyed programming, maths, and statistics. Additionally, her biology background gave her an advantage, making her a perfect fit for managing the neuroscience portfolio for Johnson & Johnson. In our conversation with Curren, we learn about her professional background, how her biology background is an advantage, and what she enjoys most about data science, as well as the important work she does at Johnson & Johnson. We then talk about AI in the pharmaceutical industry, how it is used, what it is used for, the benefits of AI both to the company and patients, and her approach to tackling data science problems. She also tells us what it was like moving into a leadership role and shares some advice for people wanting to take the plunge into leadership.

Key Points From This Episode:
Curren’s professional background and how she ended up in her role at Johnson & Johnson.
The connection between traditional neuroscience and neural networks in AI.
Ways in which traditional scientific education in neurology informs AI.
How much we currently understand about human learning.
Curren explains her role and responsibilities in her position at Johnson & Johnson.
What the term ‘precision’ means in her line of work, with examples.
Outline of Curren’s approach to data science and her role at Johnson & Johnson.
We find out what Curren’s definition of success is.
The significant benefits of optimizing processes and procedures.
Curren outlines the various ways AI is deployed at Johnson & Johnson.
Her experience moving from an individual contributor role into a leadership role.
Advice Curren has for people who are considering entering a leadership role.
The importance of trusting your team as a leader.

Tweetables:
“Finding new ways to use data to drive diagnosis is a big focus for us.” — @CurrenKatz [0:11:56]
“In data science, it can be challenging to define success. But choosing the right problem to solve can make that a lot easier.” — @CurrenKatz [0:15:27]
“I want the best data scientists in the world and to have those people on my team or the best managers in the world. I just need to give them the space to be successful.” — @CurrenKatz [0:23:55]

Links Mentioned in Today’s Episode:
Curren Katz on LinkedIn
Curren Katz on Twitter
Johnson & Johnson
Johnson & Johnson on LinkedIn
Sama
00:25:02
Oct 06, 2022
Dr. Kruft unpacks how she went from earning a Ph.D. focused on quantum chemistry to working in AI and machine learning. She shares how she first discovered her love of data science, and how her Ph.D. equipped her with the skills she needed to transition into this new and exciting field. We also discuss the data science approach to problem-solving, deep learning emulators, and the impact that machine learning could have on the natural sciences.

Key Points From This Episode:
Introducing today's guest, Bonnie Kruft, Senior Director at Microsoft’s AI for Science.
A quick look at Bonnie’s background and the research she is currently doing.
The work that Bonnie did on quantum chemistry for her Ph.D. dissertation.
How quantum chemistry led to her working in the field of AI.
An overview of the transferable skills that Bonnie picked up during her Ph.D.
Learn about Bonnie’s work with pharmaceutical companies.
How Bonnie became interested in data science and machine learning.
The data science approach to problem-solving.
The concept of failing faster and how to facilitate it.
What the word ‘quantum’ means and how it applies to computing.
How Bonnie’s Ph.D. prepared her for a career in machine learning.
The impact that machine learning could have on the natural sciences.
A breakdown of the four paradigms through which science has evolved.
The emulator approach and how it can apply anywhere data science is being done.
Learn about Microsoft's AI for Science and what they are doing with machine learning.
What Bonnie’s typical day looks like.

Tweetables:
“Although I wasn't really working on machine learning, or data science during my Ph.D., there's a lot of transferable skills that I picked up along the way while I was working on quantum chemistry.” — Bonnie Kruft [0:03:00]
“We believe that deep learning could have a really transformational impact on the natural sciences.” — Bonnie Kruft [0:13:02]
“The idea is that deep learning emulators will be used for the things that are going to make the most impact on the world. Solving healthcare challenges, combating disease, combating climate change, and sustainability. Things like that.” — Bonnie Kruft [0:21:29]

Links Mentioned in Today’s Episode:
Bonnie Kruft on LinkedIn
Microsoft
How AI Happens
Sama
00:26:26
Sep 22, 2022
In our conversation, we discuss Brandon's approach to problem-solving, the use of synthetic data, challenges facing the use of AI in drug development, why the diversity of both data and scientists is important, the three qualities required for innovation, and much more.

Key Points From This Episode:
We hear about Brandon’s unconventional background and professional career journey.
Why he has a passion for combining AI and machine learning with biology.
An outline of the Opal platform and how it is used for drug discovery.
Brandon’s approach to innovating and improving various stages of pharmaceutical development.
Whether or not he thinks his approach can be applied outside of pharmaceutical development.
How data science is used in traditional companies and how this differs at Valo.
What signs people should look out for to ensure they are at a data-driven organization.
A brief discussion about the benefits of using non-traditional approaches.
Ways in which Brandon sees synthetic data being used in the future.
The biggest challenge currently limiting the use of synthetic data.
A breakdown of the three competing qualities that are required to innovate.
Reasons why Brandon thinks current algorithms and the underlying datasets need to be improved.
Brandon shares his approach to ensuring fairness and rooting out bias in datasets.
Another problem the industry faces with scientists: a lack of diversity.
The value of re-weighting a training set.
Innovations in AI and machine learning that keep Brandon motivated and inspired.

Tweetables:
“Instead of improving the legacy, is there a way to really innovate and break things? And that’s the way we think about it here at Valo.” — @allg00d [0:08:46]
“Here at Valo, if data scientists have good ideas, we let them run with them, you know? We let them commission experiments. That’s not generally the way that a traditional organization would work.” — @allg00d [0:11:31]
“While you might be able to get synthetic data that represents the bulk, you are not going to get the resolution within those patients, within those subgroups, within the patient set.” — @allg00d [0:15:15]
“We suffer right now from a lack of diversity of data, but then, on the other side, we also suffer as a field from lack of diversity in our scientists.” — @allg00d [0:19:42]

Links Mentioned in Today’s Episode:
Brandon Allgood
Brandon Allgood on LinkedIn
Valo
Valo on LinkedIn
Opal platform
DALI Alliance
Logica
Brandon Allgood on Twitter
Rob Stevenson on LinkedIn
Sama
00:28:54
Sep 15, 2022
In this episode, Heather shares her background in both farming and commerce, and explains how her in-field experience and insights aid both her and the AI team in the development cycle. We learn about the advantages of drone-based precision spraying, the function of the herbicides that Precision AI’s drones spray onto crops, and the various challenges of creating AI models that can recognize plant variations.

Key Points From This Episode:
Introducing Heather Clair, Product Manager at precision.ai.
Heather’s background in farming and commerce, and what led her to precision.ai.
precision.ai’s dramatically different approach to agriculture.
The advantages of drone-based precision spraying, as opposed to land-based high-clearance spraying.
The function of the herbicides that precision.ai’s drones spray onto crops.
precision.ai’s use of AI to teach their drones to identify crops and distribute herbicides with precision.
The relationship between Heather, as product manager, and the AI experts at precision.ai.
Heather’s involvement in the development cycle.
Sama’s reliable accuracy rate.
The challenge of creating AI models that recognize and can work with plant variations.
How the varying colors of soil impact the AI models.
The phenomenon of phenoplasticity and the challenge it presents to the AI team.
The advantage Heather has of having in-field experience.
Heather’s closing tip: how to have happier, healthier houseplants.

Tweetables:
“Up until now, everybody just went, ‘How do we get more efficient [with] fewer passes?’ But nobody questioned, ‘Are we doing the passes with the right equipment?’” — Heather Clair [0:07:07]
“[precision.ai is] moving from land-based high-clearance sprayers to drone-based precision spraying.” — Heather Clair [0:07:24]
“I never thought when I was a little farm kid that I would be playing with drones, but it is one of my favorite things to do.” — Heather Clair [0:07:45]
“Trying to create these AI models that can work on any stage of plant can be a challenge.” — Heather Clair [0:21:15]
“It's incredible how working with my AI team has opened up my eyes to being able to look at these plants from a very logical standpoint.” — Heather Clair [0:25:34]

Links Mentioned in Today’s Episode:
Heather Clair on LinkedIn
precision.ai
Sama
00:28:49
Aug 24, 2022
Xiaoyang Yang, Head of Data, AI, Security, and IT at Second Dinner Studios, explains how Second Dinner navigates the issue of excess data with intention, and we discover the metrics that go deeper than the surface to measure the quality of competition, balance, and fairness within gaming. Xiaoyang also describes the difference between AI and gaming AI and shows us how each can be used to enhance the other. Listen to today’s episode for a careful look at how AI can be used to improve player experience and how gaming can act as a testing ground to improve AI in everyday life.

Key Points From This Episode:
Introducing Xiaoyang Yang, Head of Data, AI, Security, and IT at Second Dinner Studios.
His recently launched video game, MARVEL SNAP.
How he uses data as a tool to listen to players before translating it into insights.
The role of scale and how it changes the parameters around which players you attract.
The discrepancy between how different players experience the same feature.
Xiaoyang’s background in theoretical physics, machine learning, and gaming.
How an internship at Blizzard helped him enter a new industry.
His time working on World of Warcraft and with Riot Games.
Second Dinner’s partnership with Marvel to create MARVEL SNAP.
Xiaoyang’s aim to use data to make the game accessible to a wider audience who hasn’t tried collectible card games before.
The issue of excess data and how Second Dinner combats this with careful intention.
Data metrics that go deeper to enhance design and balance.
Competition, fairness, and balance as indicators for how fun a game will be for players.
How AI can be used to test fairness and balance in gaming.
How game AI differs from AI in general and how each can be used to inform the other.
The competitive experience you can have with gaming AI due to different skill levels.
The new experience you can offer users today that has been facilitated by AI.

Tweetables:
“We try to really listen to what our players are saying. One way to do that is through data. We use data as a tool.” — Xiaoyang Yang [0:02:28]
“When you see the scale, you begin to really understand that different players have different desires. Sometimes, different players see the same feature or the same experience in a very different type of way.” — Xiaoyang Yang [0:04:46]
“We see a lot of opportunities to use technology data AI to make MARVEL SNAP approachable to a wide audience of players and, hopefully, some players who have never tried the genre of collectible card games.” — Xiaoyang Yang [0:11:25]
“We want to make sure that there are different sets of cards you can use to have fun and still be competitive in the game. That's not an easy task.” — Xiaoyang Yang [0:19:25]

Links Mentioned in Today’s Episode:
Xiaoyang Yang on LinkedIn
Second Dinner Studios
MARVEL SNAP
Blizzard
Riot Games
How AI Happens
Sama
00:36:05
Aug 18, 2022
Tune in to hear more about Becks’ role as a lead full stack AI engineer at Rogo, how they determine what should and should not be added into the product tier for deep learning, the types of questions you should be asking along the investigation-to-product roadmap for AI and machine learning products, and so much more!

Key Points From This Episode:
An introduction to today’s guest, Lead Full Stack AI Engineer, Becks Simpson.
Becks’ cover band, Des Confitures, made up of machine learning engineers and other academics.
Becks’ career background and how she ended up in her role at Rogo.
How Rogo enables people to unlock or make sense of unstructured or unorganized data.
Why Becks’ role could be compared to that of an AI Swiss Army Knife.
How they determine what should and should not be added to the product tier for deep learning.
Becks’ experience of having to give someone higher up a reality check about the technical needs of their product.
Why Becks believes there are so many nontechnical hats you need to wear as an AI or ML expert.
Thoughts on the trend of product managers being taught how to do AI but not AI people being taught to do product management.
The importance of bringing data about data into the conversation.
The types of questions you should be asking and where the answers to understanding your dataset will then take you.
How the investigation-to-product roadmap is not something you would learn in academia for AI and machine learning, and why it should be.
Thoughts as to why it is so common for someone to have one foot in the industry and one foot in academia.
An area of AI and machine learning that Becks is truly excited about: off-the-shelf models.

Tweetables:
“People think that [AI] can do more than what it can and it has only been the last few years where we realized that actually, there’s a lot of work to put it in production successfully, there’s a lot of catastrophic ways it can fail, there are a lot of considerations that need to be put in.” — Becks Simpson [0:11:39]
“Make sure that if you ever want to put any kind of machine learning or AI or something into a product, have people who can look at a road map for doing that and who can evaluate whether it even makes sense from an ROI business standpoint, and then work with the teams.” — Becks Simpson [0:12:55]
“I think for the people who are in academia, a lot of them are doing it to push the needle, and to push the state of the art, and to build things that we didn’t have before and to see if they can answer questions that we couldn’t answer before. Having said that, there’s not always a link back to a practical use case.” — Becks Simpson [0:20:25]
“Academia will always produce really interesting things and then it’s industry that will look at whether or not they can be used for practical problems.” — Becks Simpson [0:21:59]

Links Mentioned in Today’s Episode:
Becks Simpson
Rogo
Des Confitures
Montreal Institute of Learning Algorithms
Sama
00:25:15
Aug 11, 2022
Dr. Seymour aims to take cutting-edge technology and apply it to the special effects industry, such as with the new AI platform, PLATO. He is also a lecturer at the University of Sydney and works as a consultant within the special effects industry. He is an internationally respected researcher and expert in Digital Humans and virtual production, and his experience in both visual effects and pure maths makes him perfect for AI-based visual effects. In our conversation we find out more about Dr. Seymour’s professional career journey, and what he enjoys the most about working as both a researcher and practitioner. We then get into all the details about AI in special effects as we learn about Digital Humans, the new PLATO platform, why AI dubbing is better, and the biggest challenges facing the application of AI in special effects.

Key Points From This Episode:
Dr. Seymour explains his background and professional career journey.
Why he enjoys bridging the gap between researcher and practitioner.
An outline of the different topics that Dr. Seymour lectures in and what he is currently working on.
He explains what he means by the term ‘digital humans’ and provides examples.
The special effects platform, PLATO, he is currently working on and what it will be used for.
An explanation of how PLATO was used in the Polish movie, The Champion.
He explains the future goals and aims for auto-dubbing using AI and visual effects.
Why the auto-dubbing procedure will not add to or encumber the existing processes of making a movie.
Reasons why AI auto-dubbing is better than traditional dubbing.
Whether this is a natural language processing challenge or more of a creative filmmaking challenge.
A discussion about why new technologies take so long to be applied to real-world scenarios.
How the underlying processes of PLATO are different from what is required to make a deepfake video.
His approach to overcoming challenges facing the PLATO platform.
Other areas of the entertainment industry in which Dr. Seymour expects AI to be disruptive.

Tweetables:
“In the film, half the actors are the original actors come back to just re-voice themselves, half aren’t. In the film hopefully, when you watch it, it’s indistinguishable that it wasn’t actually filmed in English.” — @mikeseymour [0:10:15]
“In our process, it doesn’t apply because if you were saying in four words what I’d said in three, it would just match. We don’t have to match the timing, we don’t have to match the lip movement or jaw movement, it all gets fixed.” — @mikeseymour [0:15:15]
“My attitude is, it’s all very well for us to get this working in the lab, but it has to work in the real world.” — @mikeseymour [0:19:56]

Links Mentioned in Today’s Episode:
Dr. Mike Seymour on LinkedIn
Dr. Mike Seymour on Twitter
Dr. Mike Seymour on Google Scholar
University of Sydney
fxguide
Dr. Paul Debevec
Pixar
Darryl Marks on LinkedIn
Adapt Entertainment
PLATO Demonstration Link
The Champion
Pinscreen
Respeecher
Rob Stevenson on LinkedIn
Rob Stevenson on Twitter
Sama
00:31:10
Jul 28, 2022
Ethics in AI is considered vital to the healthy development of all AI technologies, but this is easier said than done. In this episode of How AI Happens, we speak to Maria Luciana Axente to help us unpack this essential topic. Maria is a seasoned AI policy expert, public speaker, and executive, and has a respected track record of working with companies whose foundation is in technology. She combines her love for technology with her passion for creating positive change to help companies build and deploy responsible AI. Maria works at PwC, where her work focuses on the operationalization of AI and data across the firm. She also plays a vital role in advising government, regulators, policymakers, civil society, and research institutions on ethically aligned AI public policy. In our conversation, we talk about the importance of building responsible and ethical AI while leveraging technology to build a better society. We learn why companies need to create a culture of ethics for building AI, what type of values encompasses responsible technology, the role of diversity and inclusion, the challenges that companies face, and whose responsibility it is. We also learn about some basic steps your organization can take and hear about helpful resources available to guide companies and developers through the process.

Key Points From This Episode:
Maria’s professional career journey and her involvement in various AI organizations.
The motivation which drives AI and machine learning professionals in their careers.
How to create and foster a system that instills people with positivity.
Examples of companies that have successfully fostered a positive and ethical culture.
What good values look like for building responsible and ethical technology.
We learn about the values the responsible AI toolkit prescribes.
Some of the challenges faced when building responsible and ethical technology.
An outline of the questions a practitioner can ask to ensure they operate by universal ethics.
She shares some helpful resources concerning ethical guidelines for AI.
Why diversity and inclusion are essential to building technology.
Whose responsibility it should be to ensure the ethical and inclusive development of AI.
We wrap up the episode with a takeaway message that Maria has for listeners.

Tweetables:
“How we have proceeded so far, via Silicon Valley, 'move fast and break things.' It has to stop because we are in a time when if we continue in the same way, we're going to generate more negative impacts than positive impacts.” — @maria_axente [0:10:19]
“You need to build a culture that goes above and beyond technology itself.” — @maria_axente [0:12:05]
“Values are contextual driven. So, each organization will have their own set of values. When I say organization, I mean both those who build AI and those who use AI.” — @maria_axente [0:16:39]
“You have to be able to create a culture of a dialogue where every opinion is being listened to, and not just being listened to, but is being considered.” — @maria_axente [0:29:34]
“AI doesn't have a technical problem. AI has a human problem.” — @maria_axente [0:32:34]

Links Mentioned in Today’s Episode:
Maria Luciana Axente on LinkedIn
Maria Luciana Axente on Twitter
PwC UK
PwC responsible AI toolkit
Sama
00:40:15
Jul 21, 2022
The gap between those creating AI systems and those using the systems is growing. After 27 years on the other side of technology, Mieke decided that it was time to do something about the issues that she was seeing in the AI space. Today she is an Adjunct Professor for Sustainable, Ethical and Trustworthy AI at Vlerick Business School, and during this episode, Mieke shares her thoughts on how we can go about building responsible AI systems so that the world can experience the full range of benefits of AI.

Key Points From This Episode:
An overview of Mieke’s educational and career background.
Elements of the AI space that have and haven’t changed since Mieke studied robotics AI in 1992.
What drew Mieke back into the AI space five years ago.
The importance of understanding the limitations of AI.
Mieke shares her thoughts on how to build responsible AI systems.
The challenges of building responsible AI systems.
Why the European AI Act isn’t able to address the complexities of the AI sector.
The missing link between the people creating AI systems and the people using them.
Exploring the issue of deep fakes.
The role of AI Translators, and an overview of the AI Translator course available in Belgium.

Tweetables:
“The compute power had changed, and the volumes of data had changed, but the [AI] principles hadn't changed that much. Only some really important points never made the translation.” — @miekedk [0:02:03]
“[AI systems] don't automatically adapt themselves. You need to have your processes in place in order to make sure that the systems adapt to the changing context.” — @miekedk [0:04:06]
“AI systems are starting to be included into operational processes in companies, but only from the profit side, not understanding that they might have a negative impact on people especially when they start to make automated decisions.” — @miekedk [0:04:52]
“Let's move out of our silos and sit together in a multidisciplinary debate to discuss the systems we're going to create.” — @miekedk [0:07:52]

Links Mentioned in Today’s Episode:
Mieke de Ketelaere
Mieke's Books
The European AI Act
Sama
00:23:07
Jul 14, 2022
Today, on How AI Happens, we are joined by the Chief Digital Officer at Allied Digital, Utpal Chakraborty, to talk all things AI at Allied Digital. You’ll hear about Utpal’s AI background, how he defines Allied Digital’s mission, and what Smart Cities are and how the company captures data to achieve them, as well as why AI learning is the right approach for Smart Cities. We also discuss what success looks like to Utpal and the importance of designing something seamless for the end-user. To find out why customer success is Allied Digital’s success, tune in today!

Key Points From This Episode:
A brief overview of Utpal’s background and how he ended up in his current role at Allied.
How Utpal would characterize Allied Digital’s mission.
The definition of Smart Cities.
How Allied Digital is able to capture the data needed to make a city a Smart City.
What made it clear to Utpal that AI machine learning was the right approach for the Smart City services.
Insight into what success and an end goal look like for Utpal.
Why it is everyone’s job to design something that is seamless for the end-user.
A look at what Utpal thinks has been truly disruptive in the AI space.

Tweetables:
“I looked at how we can move this [Smart City] tool ahead and that’s where the AI machine learning came into the picture.” — @utpal_bob [0:11:11]
“[Allied Digital] wants to bring that wow factor into each and every service product and solution that we provide to our customers and, in turn, that they provide to the industry.” — @utpal_bob [0:16:27]

Links Mentioned in Today’s Episode:
Utpal Chakraborty on LinkedIn
Utpal Chakraborty on Twitter
Allied Digital Services
Sama
00:20:56
Jul 07, 2022
In today’s conversation, we learn about Jason and Kevin’s career backgrounds, the potential that the deep technology sector has, what ideas excite them the most, the challenges when investing in AI-based companies, what kind of technology is easily understood by the consumer, what makes a technological innovation successful, and much more.

Key Points From This Episode:
Background and professional paths that led Jason and Kevin to their current roles.
Reasons behind starting Calibrate Ventures and originally entering the sector.
How the deep-technology sector can solve current problems.
What kind of new technology and innovation they get most excited about.
The most essential quality of innovative technology: what people want.
Rundown of the diverse, experienced, and talented team they work with.
Jason shares an example of a technological innovation that solved a real-world problem.
How they differentiate the approach when investing in AI companies.
What influences the longer sale cycles in the AI technology sector.
An example of the challenges when integrating AI technology with the business side.
The benefits that Jason and Kevin’s experience adds to the company.
Some examples of the kind of technology that translates well to the consumer.
We find out what their opinion is about automation and augmentation.
We wrap up the show with some advice from Jason and Kevin for AI entrepreneurs.

Tweetables:
“I think for me personally, the cycle-time was very long. You work on projects for a very long time. As an investor, I get to see new ideas and new concepts every day. From an intellectual curiosity standpoint, there couldn’t be a better job.” — Kevin Dunlap [0:05:17]
“So that lights me up. When I hear somebody talk about a problem that they are looking to solve and how their technology can do it uniquely with some type of competitive or differentiated advantage we think is sustainable.” — Jason Schoettler [0:08:14]
“The things that really excite us are not, where can we do better than humans but first, where are there not humans work right now where we need humans doing work.” — Jason Schoettler [0:20:44]
“Anytime that someone is doing a job that is dangerous, that is able to be solved with technology, I think we owe it to ourselves to do that.” — Kevin Dunlap [0:22:39]

Links Mentioned in Today’s Episode:
Jason Schoettler on LinkedIn
Kevin Dunlap on LinkedIn
Calibrate Ventures
Calibrate Ventures on LinkedIn
GrayMatter Robotics
GrayMatter Robotics on LinkedIn
00:26:16
Jun 23, 2022
Whether you’re building AI for self-driving cars or for scheduling meetings, it’s all about prediction! In this episode, we’re going to explore the complexity of teaching the human power of prediction to machines.

Key Points From This Episode:
Dennis shares an overview of what he has spent his career focusing on.
How Dennis defines an intelligent agent.
The role of prediction in the AI space.
Dennis explains the mission that drove his most recent entrepreneurial venture, x.ai (acquired by Bizzabo).
Challenges of transferring humans’ capacity for prediction and empathy to machines.
Some of Dennis’s key learnings from his time working on the technology for x.ai.
Unrealistic expectations that humans have of machines.
How we can teach humans to have empathy for machines.
Dennis’s hope for the next generation in terms of their approach to AI.
A lesson Dennis learned from his daughter about AI and about human nature.
What Dennis is most excited about in the AI space.

Tweetables:
“The whole umbrella of AI is really just one big prediction engine.” — @DennisMortensen [0:03:38]
“Language is not a solved science.” — @DennisMortensen [0:06:32]
“The expectation of a machine response is different to that of a human response to the same question.” — @DennisMortensen [0:11:36]

Links Mentioned in Today’s Episode:
Dennis Mortensen on LinkedIn
Bizzabo [Formerly x.ai]
00:37:19