Last updated on Dec 26, 2023
Powered by AI and the LinkedIn community
Cisco sponsors Artificial Intelligence (AI) collaborative articles.
Sponsorship does not imply endorsement. LinkedIn's editorial content maintains complete independence.
1. Data quality and bias
2. Model complexity and interpretability
3. Human factors and feedback
4. Ethical and social implications
5. Here’s what else to consider
Artificial intelligence (AI) is the science and engineering of creating machines and systems that can perform tasks that normally require human intelligence, such as learning, reasoning, and decision making. AI algorithms are the rules and procedures that guide the behavior of AI systems, such as neural networks, natural language processing, computer vision, and machine learning. AI algorithms can be used to analyze large amounts of data, recognize patterns, generate predictions, and optimize outcomes. However, AI algorithms also have limitations, especially when it comes to predicting human behavior, which is complex, dynamic, and context-dependent. In this article, we will explore some of the main challenges and limitations of AI algorithms in predicting human behavior, and how they can be addressed or mitigated.
Top experts in this article
Selected by the community from 202 contributions.
- Shawn Tumanov, AI Governance Director @ BMO | MBA, CISA, Model Risk
- Haileleol Tibebu (PhD), Postdoctoral Fellow at University of Houston | AI Scientist | Responsible AI | AI Policy | AI Governance
1 Data quality and bias
One of the key factors that affect the performance and accuracy of AI algorithms is the quality and bias of the data they use. Data quality refers to the completeness, consistency, validity, and reliability of the data, while data bias refers to the distortion or skewness of the data due to various sources, such as sampling, measurement, labeling, or processing errors. Poor data quality and bias can lead to inaccurate or misleading predictions, as well as ethical and social issues, such as discrimination, unfairness, and lack of transparency. To overcome this limitation, AI algorithms need to use high-quality and diverse data sets that represent the target population and context, as well as apply methods and techniques to detect, reduce, and correct data bias.
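As a rough illustration of one such bias check, the sketch below (in Python, with hypothetical field names such as "group" and "hired") compares the rate of positive outcomes across groups in a dataset. A large gap between groups is one simple signal that the data, and any model trained on it, deserves closer review:

```python
from collections import defaultdict

def positive_rate_by_group(records, group_key, label_key):
    """Return the fraction of positive labels within each group,
    a simple screen for one kind of sampling or labeling bias."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for r in records:
        counts[r[group_key]][0] += 1 if r[label_key] else 0
        counts[r[group_key]][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical hiring records, illustrative only.
data = [
    {"group": "A", "hired": True},
    {"group": "A", "hired": True},
    {"group": "A", "hired": False},
    {"group": "B", "hired": True},
    {"group": "B", "hired": False},
    {"group": "B", "hired": False},
]
rates = positive_rate_by_group(data, "group", "hired")
# Group A is hired twice as often as group B in this toy sample --
# a prompt to investigate the data before training on it.
```

This is only a first screen; real bias audits compare many metrics (selection rates, error rates, calibration) across groups and contexts.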
- Doug Hubbard, Measure What Matters to Solve Your Single Most Important Decision.
When evaluating any new method, technology, or tool, I always ask "Compared to what?" Every limitation of AI I see here is also a limitation of humans. Humans have opaque judgement processes. Humans have data quality and bias limitations due to selective, and often entirely fabricated, recall. Feedback from other humans, among other things, creates noise, uncertainty, and errors in humans. Whatever ethical problems would result from overreliance on AI should be no different for overreliance on humans or the other tools and methods they use. "Algorithm aversion" applies to discussions about AI: we hold algorithms to a higher standard than human judgement, so we erroneously prefer the latter even when it makes more errors.
Bad data quality can mislead algorithms into incorrect predictions, while data bias from skewed sampling or erroneous labeling greatly affects behavioral prediction accuracy. This bias can yield unfair outcomes and distort human behavior representation. The nuanced positive, negative, or neutral responses, with their varying interpretations and context-dependence, challenge AI algorithms' categorization and prediction capabilities. Human behavior also evolves over time and is significantly influenced by context, making static AI models less effective without continuous updates. Ethical considerations become paramount, especially when AI predictions play a role in critical decision-making spheres like employment or criminal justice.
2 Model complexity and interpretability
Another challenge that AI algorithms face is the trade-off between model complexity and interpretability. Model complexity refers to the number and type of parameters, features, and layers that an AI algorithm uses to learn from the data and make predictions. Model interpretability refers to the ability to understand and explain how an AI algorithm works and why it produces certain results. Generally, more complex models can achieve higher accuracy and performance, but they also become less interpretable and more opaque, making it harder to trust, verify, and validate their predictions. Conversely, simpler models can be more interpretable and transparent, but they may also be less accurate and robust, missing important nuances and details of human behavior. To balance this trade-off, AI algorithms need to use appropriate levels of complexity and interpretability, depending on the purpose and context of the prediction, as well as employ methods and tools to enhance the explainability and accountability of their predictions.
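One way to see the interpretability side of this trade-off: in a linear model, every prediction decomposes into per-feature contributions that can be read off directly. The sketch below uses entirely hypothetical weights for an illustrative churn-risk score; it is not a real model, just a picture of why simple models are easier to explain:

```python
def linear_score(weights, features):
    """An interpretable model: the prediction is a sum of per-feature
    contributions, so every score can be explained term by term."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

# Hypothetical weights for an illustrative churn-risk score.
weights = {"days_since_login": 0.125, "support_tickets": 0.5, "tenure_years": -0.25}
user = {"days_since_login": 4, "support_tickets": 2, "tenure_years": 2}

score, parts = linear_score(weights, user)
# parts == {'days_since_login': 0.5, 'support_tickets': 1.0, 'tenure_years': -0.5}
# score == 1.0 -- each term says exactly why the score is what it is.
```

A deep network scoring the same user might be more accurate, but it offers no such term-by-term account of its output, which is why post-hoc explanation tools exist for complex models.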
- Bronek Boszczyk, Head of Spine Surgery, Orthopaediatric Hospital Aschau; Visiting Professor, Nottingham Trent University; Founder, NSpine
I worry tremendously for the coming generation of surgeons facing complex or uncommon problems. At this time, we use our experience and expertise to perform the procedure most suitable to the given circumstance. Future surgeons may be directed by AI to conduct treatment according to the standard knowledge base, which is not always the best solution in complex and uncommon conditions. Administrators and lawyers will of course love the standardization AI allows, but there is a very real danger that surgical innovation and personalized care in uncommon conditions will be stifled, especially with a new generation of surgeons who are not yet supported by the extensive personal caseload that underpins their decisions.
- Helen Wall, LinkedIn [in]structor for Power BI, Excel, Python, R, AWS | Data Science Consultant
It's really hard to predict human behavior with AI models because humans are so complex. We can often focus on one particular behavior or task (buying something online, for example) to make reasonable predictions about what a human will do given certain factors and circumstances. AI models work best on machine data with minimal human adjustments (think logs for interactions on a website). This is because it's much easier to quantify these data points and isolate them. However, predicting what a human will say or do next is, and will remain, incredibly challenging.
3 Human factors and feedback
A third limitation that AI algorithms encounter is the influence of human factors and feedback on their predictions. Human factors refer to the psychological, social, and emotional aspects of human behavior, such as preferences, motivations, values, beliefs, emotions, and attitudes. Feedback refers to the interaction and communication between humans and AI systems, such as input, output, evaluation, and adaptation. Human factors and feedback can affect the validity and reliability of AI predictions, as they can introduce noise, uncertainty, variability, and change in the data and the model. For example, humans may have different or changing preferences or opinions over time, or they may react differently to the predictions or recommendations of AI systems, either accepting, rejecting, or modifying them. To address this limitation, AI algorithms need to consider and incorporate human factors and feedback in their predictions, as well as provide mechanisms and options for human control, involvement, and collaboration.
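Incorporating feedback can be as simple as an online update rule: each time a person accepts or rejects a recommendation, the system nudges its estimate of their preference toward the observed behavior. The sketch below is a minimal illustration using exponential smoothing with hypothetical values, a stand-in for the far richer preference models used in practice:

```python
def update_estimate(estimate, feedback, learning_rate=0.2):
    """Nudge the current estimate toward the latest human feedback.
    A higher learning rate adapts faster but is noisier."""
    return estimate + learning_rate * (feedback - estimate)

preference = 0.5                 # prior: no opinion either way
for accepted in [1, 1, 0, 1]:    # 1 = user accepted, 0 = rejected
    preference = update_estimate(preference, accepted)
# preference drifts above 0.5, tracking the mostly positive feedback,
# while the one rejection pulls it partway back down.
```

The learning rate encodes a judgment call about human behavior itself: how quickly preferences are assumed to change versus how much any single reaction is noise.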
4 Ethical and social implications
A fourth and final limitation that AI algorithms face is the ethical and social implications of their predictions. Ethical and social implications refer to the potential consequences and impacts of AI predictions on human rights, values, norms, and well-being, such as privacy, security, fairness, accountability, transparency, and trust. AI predictions can have positive or negative effects on individuals, groups, and society, depending on how they are designed, implemented, and used. For instance, AI predictions can help improve health, education, and productivity, but they can also pose risks of harm, discrimination, manipulation, or exploitation. To overcome this limitation, AI algorithms need to follow ethical principles and guidelines, as well as involve and consult relevant stakeholders and experts, in order to ensure that their predictions are aligned with human values and interests, and respect human dignity and autonomy.
- Haileleol Tibebu (PhD), Postdoctoral Fellow at University of Houston | AI Scientist | Responsible AI | AI Policy | AI Governance
AI systems should be transparent in their operations, allowing for scrutiny and understanding of their decision-making processes. Moreover, there should be mechanisms for accountability, where AI systems and their creators are responsible for outcomes, particularly when these impact individuals or communities adversely. This approach necessitates a collaborative effort among technologists, ethicists, regulators, and the public, fostering a culture of continuous dialogue and assessment to ensure that AI serves as a tool for societal betterment rather than detriment.
5 Here’s what else to consider
This is a space to share examples, stories, or insights that don’t fit into any of the previous sections. What else would you like to add?
AI algorithms are only as good as the data they are fed. If the data used to train these algorithms is biased or unrepresentative, the predictions they make can be skewed. Bias can emerge from historical data that reflects societal prejudices and inequalities. For example, an AI algorithm trained on historical criminal data may unfairly predict that certain ethnic groups are more likely to engage in criminal activities due to biased arrest and sentencing records.
- Shawn Tumanov, AI Governance Director @ BMO | MBA, CISA, Model Risk
AI is trained on past data and behavior. AI assumes that if something happened 1,000 times one way, it will continue to happen that way. Humans are not rational; they are driven by different needs, wants, and desires. Therefore, AI can only predict based on past history; it cannot predict a behavior that has not occurred in the past. An example: if someone eats chicken every Tuesday for a year, AI will predict that the person will eat chicken on Tuesday. However, the person may actually have a stomach issue (an anomaly) and not eat at all that day. It would be difficult to predict such a scenario.
- Harish Saragadam, Leading GenAI Products | 2X AI Top Voice | Driving Data Science Teams | IIT Delhi | Master in Crafting High-Performance Data Science Squads | Customer-Centric Innovator | Trusted Thought Leader | Angel Investor
AI algorithms have limitations in predicting human behavior due to the complexity, unpredictability, privacy concerns, potential bias, cultural variability, context dependency, dynamic nature, individual variability, data limitations, external factors, subjective elements, and AI's limited understanding of human emotions. These challenges require careful and ethical consideration when using AI for behavior prediction.
AI algorithms would need to deal with the inherent complexity and unpredictability of human nature. And this is the real issue here. Firstly, human behavior is influenced by a myriad of factors, many of which may not be present in the datasets AI uses. Secondly, human decisions are often influenced by emotions, personal experiences, cultural nuances, and irrationalities, which are challenging to quantify and model. Ethical concerns arise when data privacy is breached or when predictions inadvertently perpetuate societal biases. Moreover, the dynamic evolution of human societies and values can render some AI models outdated, necessitating continuous updates and recalibrations. A great challenge though!