Mirror of https://github.com/kamranahmedse/developer-roadmap.git (synced 2025-08-06 17:26:29 +02:00)
fix: content typo
Typo. "xxplainability" should be "explainability".
@@ -34,7 +34,7 @@ LLMs present several risks in the domain of question answering.
 - Harmful answers
 - Lack of contextual understanding
 - Privacy and security concerns
-- Lack of transparency and xxplainability
+- Lack of transparency and explainability

 ### Text summarization

@@ -73,4 +73,4 @@ Learn more from the following resources:
 - [@article@Limitations of LLMs: Bias, Hallucinations, and More](https://learnprompting.org/docs/basics/pitfalls)
 - [@guides@Risks & Misuses | Prompt Engineering Guide](https://www.promptingguide.ai/risks)
 - [@guides@OWASP Top 10 for LLM & Generative AI Security](https://genai.owasp.org/llm-top-10/)
 - [@guides@LLM Security Guide - Understanding the Risks of Prompt Injections and Other Attacks on Large Language Models ](https://www.mlopsaudits.com/blog/llm-security-guide-understanding-the-risks-of-prompt-injections-and-other-attacks-on-large-language-models)
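Typos like this one are easy to catch mechanically before they land. Below is a minimal codespell-style sketch, assuming Python 3; the `MISSPELLINGS` list and the `src/data/roadmaps` content path are illustrative assumptions, not part of this commit.

```python
#!/usr/bin/env python3
"""Scan markdown content files for known misspellings (codespell-style sketch)."""
import pathlib
import re

# Known misspelling -> suggested correction (extend as typos are found).
# "xxplainability" is the typo fixed by this commit.
MISSPELLINGS = {
    "xxplainability": "explainability",
}

# Assumed location of the roadmap content files.
CONTENT_DIR = pathlib.Path("src/data/roadmaps")


def scan(path: pathlib.Path) -> None:
    """Print file, line number, and suggested fix for each hit."""
    for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
        for bad, good in MISSPELLINGS.items():
            if re.search(rf"\b{re.escape(bad)}\b", line, re.IGNORECASE):
                print(f"{path}:{lineno}: '{bad}' -> '{good}'")


if __name__ == "__main__":
    for md in CONTENT_DIR.rglob("*.md"):
        scan(md)
```

Run from the repository root; wiring a check like this into CI would flag such typos at review time instead of after merge.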