
How AI Happens is a podcast featuring experts and practitioners explaining their work at the cutting edge of Artificial Intelligence. Tune in to hear AI Researchers, Data Scientists, ML Engineers, and the leaders of today’s most exciting AI companies explain the newest and most challenging facets of their field. Powered by Sama.
Sep 21, 2023
Jennifer is the founder of Data Relish, a boutique consultancy firm dedicated to providing strategic guidance and executing data technology solutions that generate tangible business benefits for organizations of diverse scales across the globe. In our conversation, we unpack why a data platform is not the same as a database, working as a freelancer in the industry, common problems companies face, the cultural aspect of her work, and starting with the end in mind. We also delve into her approach to helping companies in crisis, why ‘small’ data is just as important as ‘big’ data, building companies for the future, the idea of a ‘data dictionary’, good and bad examples of data culture, and the importance of identifying an executive sponsor.

Key Points From This Episode:
Introducing Jennifer Stirrup and an overview of her professional background.
Jennifer’s passion for technology and the exciting projects she is currently working on.
Alan Turing’s legacy in terms of AI and how the landscape is evolving.
The reason for starting her own business and working as a freelancer.
Forging a career in the technology and AI space: advice from an expert.
Challenges and opportunities of working as a consultant in the technology sector.
Characteristics of AI that make it a high-pressure and high-risk environment.
She breaks down the value and role of an executive sponsor.
Common hurdles companies face regarding data and AI operations.
Circumstances when companies hire Jennifer to help them.
Safeguarding her reputation and managing unrealistic expectations.
Advice for healthy data practices to avoid problems in the future.
Why Jennifer decided on the name Data Relish.
Discover how good and reliable data can help change lives.

Quotes:
“Something that is important in AI is having an executive sponsor, someone who can really unblock any obstacles for you.” — @jenstirrup [0:08:50]
“Probably the biggest [challenge companies face] is access to the right data and having a really good data platform.” — @jenstirrup [0:10:50]
“If the crisis is not being handled by an executive sponsor, then there is nothing I can do.” — @jenstirrup [0:20:55]
“I want people to understand the value that [data] can have because when your data is good it can change lives.” — @jenstirrup [0:32:50]

Links Mentioned in Today’s Episode:
Jennifer Stirrup
Jennifer Stirrup on LinkedIn
Jennifer Stirrup on X
Data Relish
How AI Happens
Sama
00:35:16
Sep 12, 2023
Joining us today to provide insight on how to put together a credible AI solutions team is Mike Demissie, Managing Director of the AI Hub at BNY Mellon. We talk with Mike about what to consider when putting together and managing such a diverse team and how BNY Mellon is implementing powerful AI and ML capabilities to solve the problems that matter most to their clients and employees. To learn how BNY Mellon is continually innovating for the benefit of their customers and their employees, along with Mike’s thoughts on the future of generative AI, be sure to tune in!

Key Points From This Episode:
Mike’s background in engineering and his role at BNY Mellon.
The history of BNY Mellon and how they are applying AI and ML in financial services.
An overview of the diverse range of specialists that make up their enterprise AI team.
Making it easier for their organization to tap into AI capabilities responsibly.
Identifying the problems that matter most to their clients and employees.
Finding the best ways to build solutions and deploy them in a scalable fashion.
Insight into the AI solutions currently being implemented by BNY Mellon.
How their enterprise AI team chooses what to prioritize and why it can be so challenging.
The value of having a diverse set of use cases: it builds confidence and awareness.
Their internal PR strategy for educating the rest of the organization on AI implementations.
Insight into generative AI's potential to enhance BNY Mellon’s products and services.
Ensuring the proper guardrails and regulations are put in place for generative AI.
Mike’s advice on pursuing a career in the AI, ML, and data science space.

Quotes:
“Building AI solutions is very much a team sport. So you need experts across many disciplines.” — Mike Demissie [0:06:40]
“The engineers need to really find a way in terms of ‘okay, look, how are we going to stitch together the various applications to run it in the most optimal way?’” — Mike Demissie [0:09:23]
“It is not only opportunity identification, but also developing the solution and deploying it and making sure there's a sustainable model to take care of afterwards, after production — so you can go after the next new challenge.” — Mike Demissie [0:09:33]
“There's endless use of opportunities. And every time we deploy each of these solutions [it] actually sparks ideas and new opportunities in that line of business.” — Mike Demissie [0:11:58]
“Not only is it important to raise the level of awareness and education for everyone involved, but you can also tap into the domain expertise of folks, regardless of where they sit in the organization.” — Mike Demissie [0:15:36]
“Demystifying, and really just making this abstract capability real for people is an important part of the practice as well.” — Mike Demissie [0:16:10]
“Remember, [this] still is day one. As much as all the talk that is out there, we're still figuring out the best way to navigate and the best way to apply this capability. So continue to explore that, too.” — Mike Demissie [0:24:21]

Links Mentioned in Today’s Episode:
Mike Demissie on LinkedIn
BNY Mellon
How AI Happens
Sama
00:26:53
Aug 31, 2023
Mercedes-Benz is a juggernaut in the automobile industry and, in recent times, it has been deliberate in advancing the use of AI throughout the organization. Today, we welcome to the show the Executive Manager for AI at Mercedes-Benz, Alex Dogariu. Alex explains his role at the company and tells us how realistic chatbots need to be, how he and his team measure the accuracy of their AI programs, and why people should be given more access to AI and time to play around with it. Tune in for a breakdown of Alex's principles for the responsible use of AI.

Key Points From This Episode:
A warm welcome to the Executive Manager for AI at Mercedes-Benz, Alex Dogariu.
Alex’s professional background and how he ended up at Mercedes-Benz.
When Mercedes-Benz decided that it needed a team dedicated to AI.
An example of the output of descriptive analytics as a result of machine learning at Mercedes.
Alex explains his role as Executive Manager for AI.
How realistic chatbots need to be, according to Alex.
The way he measures the accuracy of his AI programs.
How Mercedes-Benz assigns AI teams to specific departments within the organization.
Why it’s important to give people access to AI technology and allow them to play with it.
Using vendors versus doing everything in-house.
Alex gives us a brief breakdown of his principles for the responsible use of AI.
What he was trying to express and accomplish with his TEDx talk.

Tweetables:
“[Chatbots] are useful helpers, they’re not replacing humans.” — Alex Dogariu [09:38]
“This [AI] technology is so new that we really just have to give people access to it and let them play with it.” — Alex Dogariu [15:50]
“I want to make people aware that AI has not only benefits but also downsides, and we should account for those. And also, that we use AI in a responsible way and manner.” — Alex Dogariu [25:12]
“It’s always a balancing act. It’s the same with certification of AI models — you don’t want to stifle innovation with legislation and laws and compliance rules but, to a certain extent, it’s necessary, it makes sense.” — Alex Dogariu [26:14]
“To all the AI enthusiasts out there, keep going, and let’s make it a better world with this new technology.” — Alex Dogariu [27:00]

Links Mentioned in Today’s Episode:
Alex Dogariu on LinkedIn
Mercedes-Benz
‘Principles for responsible use of AI | Alex Dogariu | TEDxWHU’
How AI Happens
Sama
00:27:45
Aug 29, 2023
Tarun dives into the game-changing components of Watsonx before delivering some noteworthy advice for those who are eager to forge a career in AI and machine learning.

Key Points From This Episode:
Introducing Tarun Chopra and a brief look at his professional background.
His intellectual diet: what Tarun is consuming to stay up to date with technological trends.
Common challenges in technology and AI that he encounters daily.
The importance of fully understanding what problem you want your new technology to solve.
IBM’s role in AI and how the company is helping to accelerate change in the space.
Exploring IBM’s decision to remove facial recognition from its endeavors in biometrics.
The development of IBM’s Watsonx and how it’s helping businesses tell their unique AI stories.
Why IBM’s consultative approach to introducing their customers to AI is so effective.
Tarun’s thoughts on computing power and all related costs.
Diving deeper into the three components of Watsonx.
Our guest’s words of advice to those looking to forge a career in AI and ML.

Tweetables:
“One of the first things I tell clients is, ‘If you don’t know what problems we are solving, then we’re on the wrong path.’” — @tc20640n [05:14]
“A lot of our customers have adopted AI — but if the workflow is, let’s say 10 steps, they have applied AI to only one or two steps. They don’t get to realize the full value of that innovation.” — @tc20640n [05:24]
“Every client that I talk to, they’re all looking to build their own unique story; their own unique point of view with their own unique data and their own unique customer pain points. So, I look at Watsonx as a vehicle to help customers build their own unique AI story.” — @tc20640n [14:16]
“The most important thing you need is curiosity. [And] be strong-hearted, because this [industry] is not for the weak-hearted.” — @tc20640n [27:41]

Links Mentioned in Today’s Episode:
Tarun Chopra
Tarun Chopra on LinkedIn
Tarun Chopra on Twitter
Tarun Chopra on IBM
IBM
IBM Watson
How AI Happens
Sama
00:29:32
Aug 17, 2023
Creating AI workflows can be a challenging process. And while purchasing these types of technologies may be straightforward, implementing them across multiple teams is often anything but. That’s where a company like Veritone can offer unparalleled support. With over 400 AI engines on their platform, they’ve created a unique operating system that helps companies orchestrate AI workflows with ease and efficacy. Chris discusses the differences between legacy and generative AI, how LLMs have transformed chatbots, and what you can do to identify potential AI use cases within an organization. AI innovations are taking place at a remarkable pace and companies are feeling the pressure to innovate or be left behind, so tune in to learn more about AI applications in business and how you can revolutionize your workflow!

Key Points From This Episode:
An introduction to Chris Doe, Product Management Leader at Veritone.
How Veritone is helping clients orchestrate their AI workflows.
The four verticals Chris oversees: media, entertainment, sports, and advertising.
Building solutions that infuse AI from beginning to end.
An overview of the type of AI that Veritone is infusing.
How they are helping their clients navigate the expansive landscape of cognitive engines.
Fine-tuning generative AI to be use-case-specific for their clients.
Why now is the time to be testing and defining proof of concept for generative AI.
How LLMs have transformed chatbots to be significantly more sophisticated.
Creating bespoke chatbots for clients that can navigate complex enterprise applications.
The most common challenges clients face when it comes to integrating AI applications.
Chris’s advice on taking stock of an organization and figuring out where to apply AI.
Tips on how to identify potential AI use cases within an organization.

Quotes:
“Anybody who's writing text can leverage generative AI models to make their output better.” — @chris_doe [0:05:32]
“With large language models, they've basically given these chatbots a whole new life.” — @chris_doe [0:12:38]
“I can foresee a scenario where most enterprise applications will have an LLM power chatbot in their UI.” — @chris_doe [0:13:31]
“It's easy to buy technology, it's hard to get it adopted across multiple teams that are all moving in different directions and speeds.” — @chris_doe [0:21:16]
“People can start new companies and innovate very quickly these days. And the same has to be true for large companies. They can't just sit on their existing product set. They always have to be innovating.” — @chris_doe [0:23:05]
“We just have to identify the most problematic part of that workflow and then solve it.” — @chris_doe [0:26:20]

Links Mentioned in Today’s Episode:
Chris Doe on LinkedIn
Chris Doe on X
Veritone
How AI Happens
Sama
00:28:38
Aug 11, 2023
AI is an incredible tool that has allowed us to evolve into more efficient human beings. But the lack of ethical and responsible design in AI can lead to a level of detachment from real people and authenticity. A wonderful technology strategist at Microsoft, Valeria Sadovykh, joins us today on How AI Happens. Valeria discusses why she is concerned about AI tools that assist users in decision-making, the responsibility she feels these companies hold, and the importance of innovation. We delve into common challenges these companies face in people, processes, and technology before exploring the effects of the democratization of AI. Finally, our guest shares her passion for emotional AI and tells us why that keeps her in the space. To hear it all, tune in now!

Key Points From This Episode:
An introduction to today’s guest, Valeria Sadovykh.
Valeria tells us about her studies at the University of Auckland and her Ph.D.
The problems with using the internet to assist in decision making.
How ethical and responsible AI frames Valeria’s career.
What she is doing to encourage AI leaders to prioritize responsible design.
The dangers of lack of authenticity, creativity, and emotion in AI.
Whether we need human interaction or not and if we want to preserve it.
What responsibility companies developing this technology have, according to Valeria.
She tells us about her job at Microsoft and what large organizations are doing to be ethical.
What kinds of AI organizations need to be most conscious of ethics and responsible design.
Other common challenges companies face when they plug in other technology.
How those challenges show up in people, processes, and technology when deploying AI.
Why Valeria expects some costs to decrease as AI technology democratizes over time.
The importance of innovating and being prepared to (potentially) fail.
Why the future of emotional AI and the ability to be authentic fascinates Valeria.

Tweetables:
“We have no opportunity to learn something new outside of our predetermined environment.” — @ValeriaSadovykh [0:07:07]
“[Ethics] as a concept is very difficult to understand because what is ethical for me might not necessarily be ethical for you and vice versa.” — @ValeriaSadovykh [0:11:38]
“Ethics – should not come – [in] place of innovation.” — @ValeriaSadovykh [0:20:13]
“Not following up, not investing, not trying, [and] not failing is also preventing you from success.” — @ValeriaSadovykh [0:29:52]

Links Mentioned in Today’s Episode:
Valeria Sadovykh on LinkedIn
Valeria Sadovykh on Instagram
Valeria Sadovykh on Twitter
How AI Happens
Sama
00:34:23
Aug 09, 2023
Key Points From This Episode:
She shares her professional journey that eventually led to the founding of Gradient Ventures.
How Anna would contrast AI Winter to the standard hype cycles that exist.
Her thoughts on how the web and mobile sectors were under-hyped.
Those who decide if something falls out of favor, according to Anna.
How Anna navigates hype cycles.
Her process for evaluating early-stage AI companies.
How to assess whether someone is a tourist or truly committed to something.
Approaching problems and discerning whether AI is the right answer.
Her thoughts on the best applications for AI or ML technology.
Anna shares why she is excited about large language models (LLMs).
Thoughts on LLMs and whether we should, or can, approach AGI.
A discussion: do we limit machines when we teach them to speak the way we speak?
Quality AI and navigating fairness: the concept of the Human in the Loop.
Boring but essential data tasks: whose job is that?
How she feels about sensationalism.
What gets her fired up when it is time to support new companies.
Advice to those forging careers in the AI and ML space.

Tweetables:
“When that hype cycle happens, where it is overhyped and falls out of favor, then generally that is – what is called a winter.” — @AnnapPatterson [0:03:28]
“No matter how hyped you think AI is now, I think we are underestimating its change.” — @AnnapPatterson [0:04:06]
“When there is a lot of hype and then not as many breakthroughs or not as many applications that people think are transformational, then it starts to go through a winter.” — @AnnapPatterson [0:04:47]
@AnnapPatterson [0:25:17]

Links Mentioned in Today’s Episode:
Anna Patterson on LinkedIn
‘Eight critical approaches to LLMs’
‘The next programming language is English’
‘The Advice Taker’
Gradient
How AI Happens
Sama
00:26:09
Jul 28, 2023
Wayfair uses AI and machine learning (ML) technology to interpret what its customers want, connect them with products nearby, and ensure that the products they see online look and feel the same as the ones that ultimately arrive in their homes. With a background in engineering and a passion for all things STEM, Wayfair’s Director of Machine Learning, Tulia Plumettaz, is an innate problem-solver. In this episode, she offers some insight into Wayfair’s ML-driven decision-making processes, how they implement AI and ML for preventative problem-solving and predictive maintenance, and how they use data enrichment and customization to help customers navigate the inspirational (and sometimes overwhelming) world of home decor. We also discuss the culture of experimentation at Wayfair and Tulia’s advice for those looking to build a career in machine learning.

Key Points From This Episode:
A look at Tulia’s engineering background and how she ended up in this role at Wayfair.
Defining operations research and examples of its real-life applications.
What it means for something to be strategy-proof.
Different ways that AI and ML are being integrated at Wayfair.
The challenge of unstructured data and how Wayfair takes the onus off suppliers.
Wayfair’s North Star: detecting anomalies before they’re exposed to customers.
Preventative problem-solving and how Wayfair trains ML models to “see around corners.”
Examples of nuanced outlier detection and whether or not ML applications would be suitable.
Insight into Wayfair’s bespoke search tool and how it interprets customers’ needs.
The exploit-and-explore model Wayfair uses to measure success and improve accordingly.
Tulia’s advice for those forging a career in machine learning: go back to first principles!

Tweetables:
“[Operations research is] a very broad field at the intersection between mathematics, computer science, and economics that [applies these toolkits] to solve real-life applications.” — Tulia Plumettaz [0:03:42]
“All the decision making, from which channel should I bring you in [with] to how do I bring you back if you’re taking your sweet time to make a decision to what we show you when you [visit our site], it’s all [machine learning]-driven.” — Tulia Plumettaz [0:09:58]
“We want to be in a place [where], as early as possible, before problems are even exposed to our customers, we’re able to detect them.” — Tulia Plumettaz [0:18:26]
“We have the challenge of making you buy something that you would traditionally feel, sit [on], and touch virtually, from the comfort of your sofa. How do we do that? [Through the] enrichment of information.” — Tulia Plumettaz [0:29:05]
“We knew that making it easier to navigate this very inspirational space was going to require customization.” — Tulia Plumettaz [0:29:39]
“At its core, it’s an exploit-and-explore process with a lot of hypothesis testing. Testing is at the core of [Wayfair] being able to say: this new version is better than [the previous] version.” — Tulia Plumettaz [0:31:53]

Links Mentioned in Today’s Episode:
Tulia Plumettaz on LinkedIn
Wayfair
How AI Happens
Sama
00:34:30
Jul 19, 2023
Bob highlights the importance of building interdepartmental relationships and growing a talented team of problem solvers, as well as the key role of continuous education. He also offers some insight into the technical and not-so-technical skills of a “data science champion,” tips for building adaptable data infrastructures, and the best career advice he has ever received, plus so much more. For an insider’s look at the data science operation at Freewheel and valuable advice from an analytics leader with more than two decades of experience, be sure to tune in today!

Key Points From This Episode:
A high-level overview of Freewheel, Bob’s role there, and his career trajectory thus far.
Important intersections between data science and the organization at large.
Three indicators that Freewheel is a data-driven company.
Why continuous education is a key component for agile data science teams.
The interplay between data science and the development of AI technology.
Technical (and other) skills that Bob looks for when recruiting new talent to his team.
Bob’s perspective on the value of interdepartmental collaboration.
Insight into what an adaptable data infrastructure looks like.
The importance of asking yourself, “What more can we do?”

Tweetables:
“As a data science team, it’s not enough to be able to solve quantitative problems. You have to establish connections to the company in a way that uncovers those problems to begin with.” — @Bob_Bress [0:06:42]
“The more we can do to educate folks – on the type of work that the [data science] team does, the better the position we are in to tackle more interesting problems and innovate around new ideas and concepts.” — @Bob_Bress [0:09:49]
“There are so many interactions and dependencies across any project of sufficient complexity that it’s only through [collaboration] across teams that you’re going to be able to hone in on the right answer.” — @Bob_Bress [0:17:34]
“There is always more you can do to enhance the work you’re doing, other questions you can ask, other ways you can go beyond just checking a box.” — @Bob_Bress [0:23:31]

Links Mentioned in Today’s Episode:
Bob Bress on LinkedIn
Bob Bress on Twitter
Freewheel
How AI Happens
Sama
00:25:33
Jul 12, 2023
Low-code platforms provide a powerful and efficient way to develop applications and drive digital transformation, and they are becoming popular tools for organizations. In today’s episode, we are joined by Piero Molino, the CEO and Co-Founder at Predibase, a company revolutionizing the field of machine learning by pioneering a low-code declarative approach. Predibase empowers engineers and data scientists to effortlessly construct, enhance, and implement cutting-edge models, ranging from linear regressions to expansive language models, using a mere handful of code lines. Piero is intrigued by the convergence of diverse cultural interests and finds great fascination in exploring the intricate ties between knowledge, language, and learning. His approach involves seeking unconventional solutions to problems and embracing a multidisciplinary approach that allows him to acquire novel and varied knowledge while gaining fresh experiences. In our conversation, we talk about his professional career journey, developing Ludwig, and how this eventually developed into Predibase.

Key Points From This Episode:
Background about Piero’s professional experience and skill sets.
What his responsibilities were in his previous role at Uber.
Hear about his research at Stanford University.
Details about the motivation for Predibase: Ludwig AI.
Examples of the different Ludwig models and applications.
Challenges of software development.
How the community further developed his Ludwig machine learning tool.
The benefits of community involvement for developers.
Hear how his Ludwig project developed into Predibase.
He shares the inspiration behind the name Ludwig.
Why Predibase can be considered a low-code platform.
What the Predibase platform offers users and organizations.
Ethical considerations of democratizing data science tools.
The importance of a multidisciplinary approach to developing AI tools.
Advice for upcoming developers.

Tweetables:
“One thing that I am proud of is the fact that the architecture is very extensible and really easy to plug and play new data types or new models.” — @w4nderlus7 [0:14:02]
“We are doing a bunch of things at Predibase that build on top of Ludwig and make it available and easy to use for organizations in the cloud.” — @w4nderlus7 [0:19:23]
“I believe that in the teams that actually put machine learning into production, there should be a combination of different skill sets.” — @w4nderlus7 [0:23:04]
“What made it possible for me to do the things that I have done is constant curiosity.” — @w4nderlus7 [0:26:06]

Links Mentioned in Today’s Episode:
Piero Molino on LinkedIn
Piero Molino on Twitter
Predibase
Ludwig
Max-Planck-Institute
Loopr AI
Wittgenstein's Mistress
How AI Happens
Sama
00:28:03
Jun 30, 2023
dRisk uses a unique approach to increasing AV safety: collecting real-life scenarios and data from accidents, insurance reports, and more to train autonomous vehicles on extreme edge cases. With their advanced simulation tool, they can accurately recreate and test these scenarios, allowing AV developers to improve the performance and safety of their vehicles. Join us as Chess and Rav delve into the exciting world of AVs and the challenges they face in creating safer and more efficient transportation systems.

Key Points From This Episode:
Introducing dRisk Founder and CEO, Chess Stetson, and COO, Rav Babbra.
dRisk’s mission to help autonomous vehicles become better drivers than humans.
The UK government’s interest in autonomous vehicles to solve transportation problems.
Rav’s career background; how the CAVSim competition put dRisk on his radar.
How dRisk’s software presents real-life scenarios and extreme edge cases to test AVs.
Chess defines extreme edge cases in the AV realm and explains where AVs typically go wrong.
How the company uses natural language processing and AI-based techniques to improve simulation accuracy for AV testing.
The metrics used to ensure the accuracy of the simulations.
What makes AI different from humans in an AV context.
The benchmark for the capability of AVs; the tolerance for human driver error versus AV error.
Why third-party testing is a necessity for AI.
dRisk’s assessment process for autonomous vehicles.
The delicate balance between innovation and regulation.
Examples of AV edge cases.

Tweetables:
“At the time, no autonomous vehicles could ever actually drive on the UK's roads. And that's where Chess and the team at dRisk have done such great piece of work.” — Rav Babbra [0:07:25]
“If you've got an unprotected cross-traffic turn, that's where a lot of things traditionally go wrong with AVs.” — Chess Stetson [0:08:45]
“We can, in an automated way, map out metrics for what might or might not constitute a good test and cut out things that would be something like a hallucination.” — Chess Stetson [0:13:59]
“The thing that makes AI different than humans is that if you have a good driver's test for an AI, it's also a good training environment for an AI. That's different [from] humans because humans have common sense.” — Chess Stetson [0:15:10]
“If you can really rigorously test [AI] on its ability to have common sense, you can also train it to have a certain amount of common sense.” — Chess Stetson [0:15:51]
“The difference between an AI and a human is that if you had a good test, it's equivalent to a good training environment.” — Chess Stetson [0:16:29]
“I personally think it's not unrealistic to imagine AV is getting so good that there's never a death on the road at all.” — Chess Stetson [0:18:50]
“One of the reasons that we're in the UK is precisely because the UK is going to have no tolerance for autonomous vehicle collisions.” — Chess Stetson [0:20:08]
“Now, there's never a cow in the highway here in the UK, but of course, things do fall off lorries. So if we can train against a cow sitting on the highway, then the next time a grand piano falls off the back of a truck, we've got some training data at least that helps it avoid that.” — Rav Babbra [0:35:12]
“If you target the worst case scenario, everything underneath, you've been able to capture and deal with.” — Rav Babbra [0:36:08]

Links Mentioned in Today’s Episode:
Chess Stetson
Chess Stetson on LinkedIn
Rav Babbra on LinkedIn
dRISK
How AI Happens
Sama
00:35:19
Jun 15, 2023
In this episode, we learn about the common challenges companies face when it comes to developing and deploying their AV and how Stantec uses military and aviation best practices to remove human error and ensure safety and reliability in AV operations. Corey explains the importance of collecting edge cases and shares his take on why the autonomous mobility industry is so meaningful.

Key Points From This Episode:
Introducing Autonomous Mobility Strategist and Stantec GenerationAV Founder Corey Clothier.
Corey breaks down his typical week.
Applications for autonomously mobile wheelchairs.
Corey’s experience working in robotics for the Department of Defense.
The state of autonomy back in 2009 and 2010.
Corey’s definition of commercialization.
Why there’s less forgiveness for downtime with autonomous vehicles than human-operated vehicles.
How people’s attitudes around autonomy and robotics differ in different parts of the world.
The sensationalism around autonomous vehicle “crashes.”
Stantec’s approach to measuring and assessing the safety and risk of autonomous vehicles.
Why it’s so crucial to collect edge cases and how solving for them is applied downstream.
The common challenges companies face when it comes to deploying and developing their AV.
How Stantec uses military and aviation best practices to remove human error in AV operations.
The advantages of and opportunities behind AV.
Advice for those hoping to forge an impactful career in autonomous vehicles.

Tweetables:
“For me, [commercialization] is a safe and reliable service that actually can perform the job that it's supposed to.” — @coreyclothier [0:07:04]
“Most of the autonomous vehicles that I've been working with, even since the beginning, most of them are pretty safe.” — @coreyclothier [0:08:01]
“When you start to talk to people from around the world, they absolutely have different attitudes related to autonomy and robotics.” — @coreyclothier [0:09:20]
“What's exciting though is about dRISK [is] it gives us a quantifiable risk measure, something that we can look at as a baseline and then something we can see as we make improvements and do mitigation strategies.” — @coreyclothier [0:17:18]
“The common challenges really are being able to handle all the edge cases in the operating environment that they're going to deploy.” — @coreyclothier [0:20:41]

Links Mentioned in Today’s Episode:
Corey Clothier on LinkedIn
Corey Clothier on Twitter
Stantec
dRISK
How AI Happens
Sama
00:26:46
May 11, 2023
Vishnu provides valuable advice for data scientists who want to help create high-quality data that can be used effectively to impact business outcomes. Tune in to gain insights from Vishnu's extensive experience in engineering leadership and data technologies.

Key Points From This Episode:
An introduction to Vishnu Ram, his background, and how he came to Credit Karma.
His prior exposure to AI in the form of fuzzy logic and neural networks.
What Credit Karma needed to do before the injection of AI into its data functions.
The journey of building Credit Karma into the data science operation that it is.
Challenges of building the models in time so the data isn’t outdated by the time it can be used.
The nature of technical debt.
How compensating for technical debt with people or processes is different from normal business growth.
The current data culture of Credit Karma.
Some pros and cons of a multi-team approach when introducing new platforms or frameworks.
The process of adopting TensorFlow and injecting it in a meaningful way.
How they mapped the need for this new model to a business use case and the internal education that was needed to make this change.
Insight into the shift from being an individual contributor into a management position with organization-wide challenges.
Advice to data scientists wanting to help to create a data culture that results in clean, usable, high-quality data.

Tweetables:
“One of the things that we always care about [at Credit Karma] is making sure that when you are recommending any financial products in front of the users, we provide them with a sense of certainty.” — Vishnu Ram [0:05:59]
“One of the big things that we had to do, pretty much right off the bat, was make sure that our data scientists were able to get access to the data at scale — and be able to build the models in time so that the model maps to the future and performs well for the future.” — Vishnu Ram [0:08:00]
“Whenever we want to introduce new platforms or frameworks, both the teams that own that framework as well as the teams that are going to use that framework or platform would work together to build it up from scratch.” — Vishnu Ram [0:15:11]
“If your consumers have done their own research, it’s a no-brainer to start including them because they’re going to help you see around the corner and make sure you're making the right decisions at the right time.” — Vishnu Ram [0:16:43]

Links Mentioned in Today’s Episode:
Vishnu Ram
Credit Karma
TensorFlow
TFX: A TensorFlow-Based Production-Scale Machine Learning Platform [19:15]
How AI Happens
Sama
00:31:28
May 04, 2023
Algolia is an AI-powered search and discovery platform that helps businesses deliver fast, personalized search experiences. In our conversation, Sean shares what ignited his passion for AI and how Algolia is using AI to deliver lightning-fast custom search results to each user. He explains how Algolia's AI algorithms learn from user behavior and talks about the challenges and opportunities of implementing AI in search and discovery processes. We discuss improving the user experience through AI, why technologies like ChatGPT are disrupting the market, and how Algolia is providing innovative solutions. Learn about “hashing,” the difference between keyword and vector searches, the company’s approach to ranking, and much more.

Key Points From This Episode:
Learn about Sean’s professional journey and previous experience working with AI and e-commerce.
Discover why Sean is so passionate about the technology industry and how he was able to see gaps within the e-commerce user experience.
Gain insights into the challenges currently facing search engines and why it's not just about how you ask the search engine but also about how it responds.
Get an overview of how Algolia's search algorithm differs from the rest and how it trains results on context to deliver lightning-fast, relevant results.
Learn about the problems with vectors and how Algolia is using AI to revolutionize the search and discovery process.
Sean explains Algolia's approach to ranking search results and shares details about Algolia's new decompression algorithm.
Discover how Algolia's breakthroughs were inspired by different fields like biology and the problems facing search engine optimization for the e-commerce sector.
Find out when users can expect to see Algolia's approach to search outside of the e-commerce experience.

Tweetables:
“Well, the great thing is that every 10 years the entire technology industry changes, so there is never a shortage of new technology to learn and new things to build.” — Sean Mullaney [0:05:08]
“It is not just the way that you ask the search engine the question, it is also the way the search engine responds regarding search optimization.” — Sean Mullaney [0:08:04]

Links Mentioned in Today’s Episode:
Sean Mullaney on LinkedIn
Algolia
ChatGPT
How AI Happens
Sama
00:35:32
Apr 13, 2023
Today’s guest is a Developer Advocate and Machine Learning Growth Engineer at Roboflow who has the pleasure of providing Roboflow users with all the information they need to use computer vision products optimally. In this episode, Piotr shares an overview of his educational and career trajectory to date, from starting out as a civil engineering graduate, to founding an open source project that was way ahead of its time, to breaking the million-reader milestone on Medium. We also discuss Meta’s Segment Anything Model, the value of packaged models over non-packaged ones, and how computer vision models are becoming more accessible.

Key Points From This Episode:
What Piotr’s current roles at Roboflow entail.
An overview of Piotr’s educational and career journey to date.
The Medium milestone that Piotr recently achieved.
The motivation behind Piotr’s open source project, Make Sense (and the impact it has had).
Piotr’s approach to assessing computer vision models.
The issue of lack of support in the computer vision space.
Why Piotr is an advocate of packaged models.
What makes Meta’s Segment Anything Model so novel and exciting.
An example that highlights how computer vision models are becoming more accessible.
Piotr’s thoughts about the future potential of ChatGPT.

Tweetables:
“Not only [do] I showcase [computer vision] models but I also show people how to use them to solve some frequent problems.” — Piotr Skalski [0:10:14]
“I am always a fan of models that are packaged.” — Piotr Skalski [0:15:58]
“We are drifting towards a direction where users of those models will not necessarily have to be very good at computer vision to use them and create complicated things.” — Piotr Skalski [0:32:15]

Links Mentioned in Today’s Episode:
Piotr Skalski on LinkedIn
Piotr Skalski on Medium
Make Sense
Roboflow
Segment Anything by Meta AI
How to Use the Segment Anything Model
How AI Happens
Sama
00:37:31
Mar 30, 2023
In our conversation, we learn about her professional journey and how this led to her working at DataRobot, what she realized was missing from the DataRobot platform, and what she did to fill the gap. We discuss the importance of addressing bias in AI models, approaches to safeguarding models against bias, and why incorporating ethics into AI development is essential. We also delve into the different perspectives of ethical AI, the elements of trust, what ethical “guard rails” are, and the governance side of AI.

Key Points From This Episode:
Dr. Mahmoudian shares her professional background and her interest in AI.
How Dr. Mahmoudian became interested in AI ethics and building trustworthy AI.
What she hopes to achieve with her work and research.
Hear practical examples of how to build ethical and trustworthy AI.
We unpack the ethical and trustworthy aspects of AI development.
What the elements of trust are and how to implement them into a system.
An overview of the different essential processes that must be included in a model.
How to safeguard systems against bias and the role of monitoring.
Why continual improvement is key to ethical AI development.
Find out more about DataRobot and Dr. Mahmoudian’s multiple roles at the company.
She explains her approach to working with customers.
Discover simple steps to begin practicing responsible AI development.

Tweetables:
“When we talk about ‘guard rails’ sometimes you can think of the best practice type of ‘guard rails’ in data science but we should also expand it to the governance and ethics side of it.” — @HaniyehMah [0:11:03]
“Ethics should be included as part of [trust] to truly be able to think about trusting a system.” — @HaniyehMah [0:13:15]
“[I think of] ethics as a sub-category but in a broader term of trust within a system.” — @HaniyehMah [0:14:32]
“So depending on the [user] persona, we would need to think about what kind of [system] features we would have.” — @HaniyehMah [0:17:25]

Links Mentioned in Today’s Episode:
Haniyeh Mahmoudian on LinkedIn
Haniyeh Mahmoudian on Twitter
DataRobot
National AI Advisory Committee
How AI Happens
Sama
00:32:25
Mar 16, 2023
Kristen is also the founder of Data Moves Me, a company that offers courses, live training, and career development. She hosts The Cool Data Projects Show, where she interviews AI, machine learning (ML), and deep learning (DL) experts about their projects.

Key Points From This Episode:
Kristen’s background in the data science world and what led her to her role at Comet.
What it means to be a developer advocate and build community.
Some of the coolest AI, ML, and DL ideas from The Cool Data Projects Show!
One of the computer vision projects Kristen is working on that uses Kaggle datasets.
How Roboflow can help you deploy a computer vision model in an afternoon.
The amount of data that is actually needed for object detection.
Solving the challenge of contextualization for computer vision models.
A look at attention mechanisms in explainable AI and how to tackle large datasets.
Insight into the motivations behind Kristen’s school bus project.
The value of learning through building and solving “real” problems.
How Kristen’s background as a data scientist lends itself to computer vision.
Free and easily-available resources that others in the space have created to assist you.
Advice for those forging their own careers: get involved in the community!

Tweetables:
“I’m finding people who are working on really cool things and focusing on the methodology and approach. I want to know: how did you collect your data? What algorithm are you using? What algorithms did you consider? What were the challenges that you faced?” — @DataMovesHer [0:05:55]
“A lot of times, it comes back to [the fact that] more data is always better!” — @DataMovesHer [0:15:40]
“I like [to do computer vision] projects that allow me to solve a problem that is actually going on in my life. When I do one, suddenly, it becomes a lot easier to see other ways that I can make other parts of my life easier.” — @DataMovesHer [0:18:59]
“The best thing you can do is to get involved in the community. It doesn’t matter whether that community is on Reddit, Slack, or LinkedIn.” — @DataMovesHer [0:23:32]

Links Mentioned in Today’s Episode:
Data Moves Me
Comet
The Cool Data Projects Show
Mothers of Data Science
Kristen Kehrer on LinkedIn
Kristen Kehrer on Twitter
Kristen Kehrer on Instagram
Kristen Kehrer on YouTube
Kristen Kehrer on TikTok
Kaggle
Roboflow
Kangas Library
How AI Happens
Sama
00:26:09
Mar 01, 2023
In this episode, we learn about the benefits of blue-collar AI education and the role of company culture in employee empowerment. Dr. Borne shares the history of data collection and analysis in astronomy and the evolution of cookies on the internet, and explains the concept of Web3 and the future of data ownership. Dr. Borne is of the opinion that AI serves to amplify and assist people in their jobs rather than replace them, and in our conversation, we discover how everyone can benefit if adequately informed.

Key Points From This Episode:
Data scientist and astrophysicist Dr. Kirk Borne’s vast background.
The history of data collection and analysis in astronomy.
How Dr. Borne fulfills his passion for educating others.
DataPrime’s blue-collar AI education course.
How AI amplifies your work without replacing it.
The difference between efficiency and effectiveness.
The difference between educating blue-collar students and graduate students.
The goal of the blue-collar AI course.
The ways in which automation and digital transformation are changing jobs.
Comparison between the AI revolution (the fourth industrial revolution) and previous industrial revolutions.
The role of company culture in employee empowerment.
Dr. Borne’s approach to teaching AI education.
Dr. Borne shares a humorous Richard Feynman anecdote.
The concept of Web3 and the future of data ownership.
The history and evolution of cookies on the internet.
The ethical concerns of AI.

Tweetables:
“[AI] amplifies and assists you in your work. It helps automate certain aspects of your work but it’s not really taking your work away. It’s just making it more efficient, or more effective.” — @KirkDBorne [0:11:18]
“There’s a difference between efficiency and effectiveness … Efficiency is the speed at which you get something done and effective means the amount that you can get done.” — @KirkDBorne [0:11:29]
“There are different ways that automation and digital transformation are changing a lot of jobs. Not just the high-end professional jobs, so to speak, but the blue-collar gentlemen.” — @KirkDBorne [0:18:06]
“What we’re trying to achieve with this blue-collar AI is for people to feel confident with it and to see where it can bring benefits to their business.” — @KirkDBorne [0:24:08]
“I have yet to see an auto-complete come over your phone and take over the world.” — @KirkDBorne [0:26:56]

Links Mentioned in Today’s Episode:
Kirk Borne, Ph.D.
Kirk Borne, Ph.D. on LinkedIn
Kirk Borne, Ph.D. on Twitter
Richard Feynman
JennyCo
Alchemy Exchange
Booz Allen Hamilton
DataPrime
How AI Happens
Sama
00:35:49
Feb 23, 2023
Goodbye Passwords, Hello Biometrics with George Williams
Episode 61: Show Notes

Is it really safer to have a system know your biometrics rather than your password? If so, who do you trust with this data? George Williams, a Silicon Valley tech veteran who most recently served as Head of AI at SmileIdentity, is passionate about machine learning, mathematics, and data science. In this episode, George shares his opinions on the dawn of AI, how long he believes AI has been around, and references the ancient Greeks to show the relationship between the current fifth big wave of AI and the genesis of it all. Focusing on the work done by SmileIdentity, you will understand the growth of AI in Africa, what biometrics is and how it works, and the mathematical vulnerabilities in machine learning. Biometrics is substantially more complex than password authentication, and George explains why he believes this is the way of the future.

Key Points From This Episode:
George's opinions on the genesis of AI.
The link between robotics and AI.
The technology and ideas of the Ancient Greeks, in the time of Aristotle.
George’s career path: software engineering versus mathematics.
What George’s role is within SmileIdentity.
How Africa is skipping passwords and going into advanced biometrics.
How George uses biometrics in his everyday life.
Quantum supremacy: how it works and its implications.
George’s opinions on conspiracy theories about the government having personal information.
Why understanding the laws and regulations of technology is important.
The challenges of data security and privacy.
Some ethical, unbiased questions about biometrics, mass surveillance, and AI.
George explains ‘garbage in, garbage out’ and how it relates to machine learning.
How SmileIdentity is ensuring ethnic diversity and accuracy.
How to measure an unbiased algorithm.
Why machine learning is a life cycle.
The fraud detection technology in SmileIdentity biometric security.
The shift of focus in machine learning and cyber security.

Tweetables:
“Robotics and artificial intelligence are very much intertwined.” — @georgewilliams [0:02:14]
“In my daily routine, I leverage biometrics as much as possible and I prefer this over passwords when I can do so.” — @georgewilliams [0:08:13]
“All of your data is already out there in one form or another.” — @georgewilliams [0:10:38]
“We don’t all need to be software developers or ML engineers, but we all have to understand the technology that is powering [the world] and we have to ask the right questions.” — @georgewilliams [0:11:53]
“[Some of the biometric] technology is imperfect in ways that make me uncomfortable and this technology is being deployed at massive scale in parts of the world and that should be a concern for all of us.” — @georgewilliams [0:20:33]
“In machine learning, once you train a model and deploy it you are not done. That is the start of the life cycle of activity that you have to maintain and sustain in order to have really good AI biometrics.” — @georgewilliams [0:22:06]

Links Mentioned in Today’s Episode:
George Williams on Twitter
George Williams on LinkedIn
SmileIdentity
NYU Movement Lab
ChatGPT
How AI Happens
Sama
00:33:25
Dec 15, 2022
Our discussion today dives into the climate change-related applications of AI and machine learning, and how organizations are working towards mobilizing them to address the climate problem. Priya shares her thoughts on advanced technology and creating a dystopian version of humanity, what made her decide on her Ph.D. topic, and what she learned touring the world and interviewing power grid experts.

Key Points From This Episode:
Priya shares her take on ChatGPT.
We talk about ChatGPT guard rails and whether they should be implemented manually or with built-in technology that automatically detects issues.
Concerns with the concept of advanced technology and creating a dystopian version of humanity.
What made Priya want to get into her particular Ph.D. topic.
What surprised her about her tour around the world interviewing people.
Priya explains what she means by a 'systems problem.'
Machine learning and AI in power grids: what is the breadth of opportunity?
Priya speaks to the reasons why she founded a climate change AI organization.
Narrowing the focus, as an organization, within AI and climate change.
Priya shares an example of what work looks like for someone in a role with machine learning and climate change.
Recent wins in the climate change world and how they measure the success of their progress.
The gap between the vision of where she is now and where she wants to be in the medium term.

Tweetables:
“When we are working on climate change related problems, even ones that are “technical problems” every problem is basically a socio-political technical problem, and really understanding that context when we move that forward can be really important.” — @priyald17 [0:10:02]
“Machine learning in power grids and really in a lot of other climate relevance sectors can contribute along several themes or in several ways.” — @priyald17 [0:12:18]
“What prompted us to found this organization, Climate Change AI, [is] to really help mobilize the AI machine learning community towards climate action by bringing them together with climate researchers, entrepreneurs, industry, policy, all of these players who are working to address the climate problems and sort of to do that together.” — @priyald17 [0:17:21]
“So the whole idea of Climate Change AI is rather than just focusing on what can we as individuals who are already in this area do to do research projects or deployment projects in this area, how can we sort of mobilize the broader talent pool and really help them to connect with entities that are really wanting to use their skills for climate action.” — @priyald17 [0:19:17]

Links Mentioned in Today’s Episode:
Priya Donti
Priya Donti on Twitter
Putting the Smarts in the Smart Grid
Climate Change AI
Climate Change AI Interactive Summaries
How AI Happens
Sama
00:30:20