Understanding How Algorithms Can Go Wrong

Welcome to the learning edition of the Data Pragmatist, your dose of all things data science and AI.
📖 Estimated Reading Time: 5 minutes. Missed our previous editions?
🌌 Teenager Wins $250,000 for AI Discovery of Over a Million Space Objects. Link
Matteo Paz, an 18-year-old from Pasadena, California, developed an AI algorithm that identified 1.5 million new objects in space using data from NASA's NEOWISE mission.
His machine-learning model analyzed 200 terabytes of infrared survey data to detect variable objects like supernovae and black holes.
These findings could enhance scientists' understanding of cosmic phenomena, such as the universe's expansion rate.
Paz's catalog, VarWISE, is currently utilized by Caltech researchers and is slated for publication later this year.
⚕️ Study Reveals AI's Shortcomings in Detecting Critical Health Conditions. Link
A recent study published in Communications Medicine, a Nature Portfolio journal, found that AI systems missed approximately 66% of life-threatening injuries in hospitalized patients.
Machine learning models trained solely on existing patient data failed to accurately predict patient deterioration.
Researchers emphasized the necessity of understanding the contexts in which these models can make precise decisions.
The study suggests that while large language models like ChatGPT show promise in medical applications, their reliability requires thorough research before clinical use.
There’s a reason 400,000 professionals read this daily.
Join The AI Report, trusted by 400,000+ professionals at Google, Microsoft, and OpenAI. Get daily insights, tools, and strategies to master practical AI skills that drive results.
🧠 The Dark Side of Data Science: How Algorithms Can Go Wrong
Flawed and Biased Data
One of the biggest issues in data science is the use of flawed or biased data. Algorithms are only as good as the data they are trained on: if historical data contains biases, the models built from it will reinforce them. For example, AI-powered hiring tools have been shown to discriminate against women because they were trained on male-dominated hiring data. Similarly, facial recognition software has struggled to accurately identify people with darker skin tones, raising serious ethical concerns.
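To make the mechanism concrete, here is a minimal sketch, using entirely hypothetical hiring data, of how skew in historical labels can be surfaced before training. It computes the selection rate per group and the disparate impact ratio used in the "four-fifths rule"; the column names and numbers are illustrative, not from any real dataset.

```python
# A minimal sketch (hypothetical data): if historical hiring decisions favored
# one group, a model trained on those labels inherits the same skew.
import pandas as pd

# Hypothetical historical hiring records a model would be trained on.
history = pd.DataFrame({
    "gender": ["M"] * 80 + ["F"] * 20,
    "hired":  [1] * 48 + [0] * 32 + [1] * 4 + [0] * 16,
})

# Selection rate per group in the training labels.
rates = history.groupby("gender")["hired"].mean()
print(rates)  # F: 0.20, M: 0.60

# Disparate impact ratio (the four-fifths rule flags values below 0.8).
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 -> strongly skewed labels
```

Any model fit to labels like these will tend to reproduce the skew unless it is measured and corrected first.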

Unintended Consequences of Automation
Many industries are rapidly adopting AI and machine learning to automate decision-making processes. However, automation does not always lead to fair or accurate outcomes. In healthcare, predictive models have been used to determine which patients get priority for treatments, sometimes favoring wealthier individuals over those in greater need. In finance, credit-scoring algorithms have penalized certain demographics due to biased training data, limiting access to loans and financial services.
AI Failures in Real-World Applications
Some well-known cases highlight the dangers of poorly designed AI:
Microsoft’s AI Chatbot (Tay) – Released in 2016, Tay learned from its interactions with Twitter users and, within a day, began posting offensive and racist content, forcing Microsoft to take it offline.
Predictive Policing – AI-driven crime prediction tools have been found to disproportionately target minority communities, reinforcing existing inequalities.
Healthcare AI Bias – Some AI models used for diagnosing diseases have been trained on limited datasets, leading to misdiagnosis in underrepresented populations.
Autonomous Vehicles – Self-driving cars have struggled with recognizing pedestrians, especially in low-light conditions, leading to accidents and safety concerns.
The Challenge of Explainability in AI
Many AI models, especially deep learning systems, function as “black boxes,” meaning their decision-making processes are difficult to interpret. This lack of transparency can lead to serious problems when AI-driven decisions impact people’s lives. For instance, if an AI system denies a loan or medical treatment, individuals should have the right to understand why. The lack of explainability makes it harder to detect biases and errors, reducing trust in AI systems.
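One widely used way to peek inside a black box is to measure how much a model's performance drops when each input feature is shuffled. Below is a minimal sketch using scikit-learn's permutation importance on synthetic data; the feature names are purely illustrative, and this is one probing technique among many, not a complete explanation of any real system.

```python
# A minimal sketch: permutation importance probes which inputs drive a
# black-box model's decisions (synthetic data, illustrative feature names).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # pretend columns: income, debt, age
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt", "age"], result.importances_mean):
    print(f"{name}: {score:.3f}")  # larger drop = more influential feature
```

A ranking like this does not fully explain an individual decision, but it is often the first step in checking whether a model leans on a feature it should not.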
Preventing Algorithmic Disasters
To minimize the risks associated with AI, organizations must prioritize ethical AI development. This includes using diverse and representative data, implementing fairness audits, and ensuring transparency in AI models. Governments and regulatory bodies also need to establish guidelines for responsible AI usage to prevent unintended harm.
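As a starting point, a fairness audit can be as simple as comparing a model's positive-decision rate across groups on a held-out set. The helper below is a hypothetical sketch, not a complete audit; real audits also examine error rates, calibration, and the data pipeline itself.

```python
# A minimal fairness-audit sketch: compare a model's positive-decision rate
# across groups (all names and values here are hypothetical).
import pandas as pd

def audit_decision_rates(predictions: pd.Series, groups: pd.Series) -> pd.DataFrame:
    """Report the positive-decision rate per group and each group's gap
    relative to the best-treated group."""
    rates = predictions.groupby(groups).mean().rename("positive_rate")
    report = rates.to_frame()
    report["gap_vs_best"] = rates.max() - rates
    return report

# Hypothetical model outputs on a held-out audit set.
preds  = pd.Series([1, 1, 0, 1, 0, 0, 0, 1, 1, 0])
groups = pd.Series(["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])
print(audit_decision_rates(preds, groups))
```

A noticeable gap is a signal to investigate the data and features, not proof of bias on its own.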
Top 5 AI Tools for Job Interviews
1. HireVue
Use Case: AI-driven video interview platform
Features:
Conducts structured video interviews
AI analyzes candidate responses, facial expressions, and voice tone
Provides insights to recruiters for decision-making
Offers coding assessments for tech roles
2. Pymetrics
Use Case: AI-based soft skills and personality assessment
Features:
Uses neuroscience-based games to evaluate traits like risk-taking, decision-making, and problem-solving
AI matches candidates to roles based on behavioral and cognitive traits
Helps reduce hiring bias
3. MyInterview
Use Case: AI-powered video interviewing with personality insights
Features:
Records candidate video responses to predefined questions
AI assesses tone, speech patterns, and personality traits
Provides a shortlist of top candidates based on compatibility
4. CodeSignal
Use Case: AI-driven technical assessment platform
Features:
Offers live coding interviews with AI-driven evaluation
Uses real-world coding challenges to assess technical skills
AI-generated insights help recruiters identify top performers
5. X0PA AI
Use Case: AI-powered recruitment and interview automation
Features:
Predicts candidate-job fit using machine learning
Automates interview scheduling and assessments
Uses AI-driven data insights to improve hiring decisions
If you are interested in contributing to the newsletter, reply to this email. We are looking for contributions from you, our readers, to keep the community alive and growing.