mirror of
https://github.com/kamranahmedse/developer-roadmap.git
synced 2025-07-31 22:40:19 +02:00
Improved Content in Prompt Hacking (#7308)
* Update index.md * Update src/data/roadmaps/prompt-engineering/content/107-prompt-hacking/index.md --------- Co-authored-by: Kamran Ahmed <kamranahmed.se@gmail.com>
This commit is contained in:
@@ -1,4 +1,8 @@
|
|||||||
# Prompt Hacking
|
# Prompt Hacking
|
||||||
|
|
||||||
|
Prompt hacking refers to techniques used to manipulate or exploit AI language models by carefully crafting input prompts. This practice aims to bypass the model's intended constraints or elicit unintended responses. Common methods include injection attacks, where malicious instructions are embedded within seemingly innocent prompts, and prompt leaking, which attempts to extract sensitive information from the model's training data.
|
||||||
|
|
||||||
|
Visit the following resources to learn more:
|
||||||
|
|
||||||
- [@article@Prompt Hacking](https://learnprompting.org/docs/prompt_hacking/intro)
|
- [@article@Prompt Hacking](https://learnprompting.org/docs/prompt_hacking/intro)
|
||||||
- [@feed@Explore top posts about Security](https://app.daily.dev/tags/security?ref=roadmapsh)
|
- [@feed@Explore top posts about Security](https://app.daily.dev/tags/security?ref=roadmapsh)
|
||||||
|
Reference in New Issue
Block a user