diff --git a/.gitignore b/.gitignore index a0625636..08d00376 100644 --- a/.gitignore +++ b/.gitignore @@ -9,6 +9,7 @@ *.user *.userosscache *.sln.docstates +venv # User-specific files (MonoDevelop/Xamarin Studio) *.userprefs diff --git a/09-chat-project/README.md b/09-chat-project/README.md new file mode 100644 index 00000000..c5bc6c66 --- /dev/null +++ b/09-chat-project/README.md @@ -0,0 +1,376 @@ +# Chat project + +This chat project shows how to build a Chat Assistant using GitHub Models. + +Here's what the finished project looks like: + +
+*Screenshot: Chat app*
+
+Some context: building chat assistants with generative AI is a great way to start learning about AI. In this lesson, you'll learn how to integrate generative AI into a web app. Let's begin.
+
+## Connecting to generative AI
+
+For the backend, we're using GitHub Models. It's a great service that lets you use AI for free. Go to its playground and grab the code that corresponds to your chosen backend language. Here's what it looks like in the [GitHub Models Playground](https://github.com/marketplace/models/azure-openai/gpt-4o-mini/playground):
+
+*Screenshot: GitHub Models AI Playground*
+
+As mentioned, select the "Code" tab and your chosen runtime.
+
+*Screenshot: playground choice*
+
+In this case, we select Python, which means we pick this code:
+
+```python
+"""Run this model in Python
+
+> pip install openai
+"""
+import os
+from openai import OpenAI
+
+# To authenticate with the model you will need to generate a personal access token (PAT) in your GitHub settings.
+# Create your PAT token by following instructions here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens
+client = OpenAI(
+    base_url="https://models.github.ai/inference",
+    api_key=os.environ["GITHUB_TOKEN"],
+)
+
+response = client.chat.completions.create(
+    messages=[
+        {
+            "role": "system",
+            "content": "",
+        },
+        {
+            "role": "user",
+            "content": "What is the capital of France?",
+        }
+    ],
+    model="openai/gpt-4o-mini",
+    temperature=1,
+    max_tokens=4096,
+    top_p=1
+)
+
+print(response.choices[0].message.content)
+```
+
+Let's clean up this code a bit so it's reusable:
+
+```python
+def call_llm(prompt: str, system_message: str):
+    response = client.chat.completions.create(
+        messages=[
+            {
+                "role": "system",
+                "content": system_message,
+            },
+            {
+                "role": "user",
+                "content": prompt,
+            }
+        ],
+        model="openai/gpt-4o-mini",
+        temperature=1,
+        max_tokens=4096,
+        top_p=1
+    )
+
+    return response.choices[0].message.content
+```
+
+With this `call_llm` function, we can now pass in a prompt and a system prompt, and the function returns the result.
+
+### Customize AI Assistant
+
+If you want to customize the AI assistant, you can specify how you want it to behave by populating the system prompt like so:
+
+```python
+call_llm("Tell me about you", "You're Albert Einstein, you only know of things in the time you were alive")
+```
+
+## Expose it via a Web API
+
+Great, the AI part is done; let's see how we can integrate it into a Web API. For the Web API, we're choosing Flask, but any web framework should do. Let's see the code for it:
+
+```python
+# api.py
+from flask import Flask, request, jsonify
+from llm import call_llm
+from flask_cors import CORS
+
+app = Flask(__name__)
+CORS(app)  # enable CORS for all origins
+
+@app.route("/", methods=["GET"])
+def index():
+    return "Welcome to this API. Call POST /hello with 'message': 'my message' as JSON payload"
+
+
+@app.route("/hello", methods=["POST"])
+def hello():
+    # get message from request body { "message": "do this task for me" }
+    data = request.get_json()
+    message = data.get("message", "")
+
+    response = call_llm(message, "You are a helpful assistant.")
+    return jsonify({
+        "response": response
+    })
+
+if __name__ == "__main__":
+    app.run(host="0.0.0.0", port=5000)
+```
+
+Here, we create a Flask API and define two routes: "/" and "/hello". The latter is meant to be used by our frontend to pass questions to the backend.
+
+To integrate *llm.py*, here's what we need to do:
+
+- Import the `call_llm` function:
+
+  ```python
+  from llm import call_llm
+  from flask import Flask, request
+  ```
+
+- Call it from the "/hello" route:
+
+  ```python
+  @app.route("/hello", methods=["POST"])
+  def hello():
+      # get message from request body { "message": "do this task for me" }
+      data = request.get_json()
+      message = data.get("message", "")
+
+      response = call_llm(message, "You are a helpful assistant.")
+      return jsonify({
+          "response": response
+      })
+  ```
+
+  Here we parse the incoming request to retrieve the `message` property from the JSON body.
+  Thereafter, we call the LLM with this call:
+
+  ```python
+  response = call_llm(message, "You are a helpful assistant")
+
+  # return the response as JSON
+  return jsonify({
+      "response": response
+  })
+  ```
+
+Great, now we've done what we need.
+
+### Configure CORS
+
+We should call out that we set up CORS (cross-origin resource sharing). Because our backend and frontend will run on different ports, we need to allow the frontend to call into the backend. There's a piece of code in *api.py* that sets this up:
+
+```python
+from flask_cors import CORS
+
+app = Flask(__name__)
+CORS(app)  # enable CORS for all origins
+```
+
+Right now it's set up to allow "*", meaning all origins, which is a bit unsafe; we should restrict it once we go to production. A sketch further down shows one way to do that.
+
+## Run your project
+
+OK, so we have *llm.py* and *api.py*; how can we get this backend running? Well, there are two things we need to do:
+
+- Install dependencies:
+
+  ```sh
+  cd backend
+  python -m venv venv
+  source ./venv/bin/activate
+
+  pip install openai flask flask-cors
+  ```
+
+- Start the API:
+
+  ```sh
+  python api.py
+  ```
+
+  If you're in Codespaces, you need to go to Ports in the bottom part of the editor, right-click the port, click "Port Visibility" and select "Public". Once the API is running, you can sanity-check it with a small client script, as sketched further down.
+
+### Work on a frontend
+
+Now that we have an API up and running, let's create a bare minimum frontend that we'll improve stepwise. In a *frontend* folder, create the following:
+
+```text
+backend/
+frontend/
+  index.html
+  app.js
+  styles.css
+```
+
+Let's start with **index.html**:
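+
+Circling back to the CORS note above, here's a minimal sketch of how the wide-open policy could be restricted before going to production. The origin below is an assumption; use whatever URL your frontend is actually served from, and note that flask-cors also supports more granular, per-route configuration:
+
+```python
+# api.py - sketch of a tighter CORS setup; the origin is a placeholder
+from flask import Flask
+from flask_cors import CORS
+
+app = Flask(__name__)
+
+# Allow only the frontend's origin instead of the default "*" (all origins)
+CORS(app, origins=["http://localhost:8080"])
+```
+
+With this in place, browsers will only permit pages served from that origin to call the API.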
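+
+As referenced in the "Run your project" steps, here's a small sketch for sanity-checking the running API from Python. It assumes the `requests` package is installed (`pip install requests`) and that the API listens on port 5000, as configured in *api.py*:
+
+```python
+# test_api.py - quick manual check of the /hello endpoint
+import requests
+
+resp = requests.post(
+    "http://localhost:5000/hello",
+    json={"message": "What is the capital of France?"},
+)
+resp.raise_for_status()
+
+# The API wraps the LLM reply in a JSON object: { "response": "..." }
+print(resp.json()["response"])
+```
+
+If this prints an answer about Paris, the backend and its connection to GitHub Models are wired up correctly.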
+ + + \ No newline at end of file diff --git a/09-chat-project/solution/frontend/styles.css b/09-chat-project/solution/frontend/styles.css new file mode 100644 index 00000000..1b5cdb55 --- /dev/null +++ b/09-chat-project/solution/frontend/styles.css @@ -0,0 +1,155 @@ +/* Dark, modern chat styles for the AI chat page */ +:root{ + --bg-1: #0f1724; + --bg-2: #071226; + --panel: rgba(255,255,255,0.03); + --glass: rgba(255,255,255,0.04); + --accent: #7c3aed; /* purple */ + --accent-2: #06b6d4; /* cyan */ + --muted: rgba(255,255,255,0.55); + --user-bg: linear-gradient(135deg,#0ea5a4 0%, #06b6d4 100%); + --ai-bg: linear-gradient(135deg,#111827 0%, #0b1220 100%); + --radius: 14px; + --max-width: 900px; +} + +*{box-sizing:border-box} +html,body{height:100%} +body{ + margin:0; + font-family: 'Inter', system-ui, -apple-system, 'Segoe UI', Roboto, 'Helvetica Neue', Arial; + background: radial-gradient(1000px 500px at 10% 10%, rgba(124,58,237,0.12), transparent), + radial-gradient(800px 400px at 90% 90%, rgba(6,182,212,0.06), transparent), + linear-gradient(180deg,var(--bg-1), var(--bg-2)); + color: #e6eef8; + -webkit-font-smoothing:antialiased; + -moz-osx-font-smoothing:grayscale; + padding:32px; +} + +.app{ + max-width:var(--max-width); + margin:0 auto; + height:calc(100vh - 64px); + display:flex; + flex-direction:column; + gap:16px; +} + +.header{ + display:flex; + align-items:center; + gap:16px; + padding:16px 20px; + border-radius:12px; + background: linear-gradient(180deg, rgba(255,255,255,0.02), rgba(255,255,255,0.01)); + box-shadow: 0 6px 18px rgba(2,6,23,0.6); + backdrop-filter: blur(6px); +} +.header .logo{ + font-size:28px; + width:56px;height:56px; + display:flex;align-items:center;justify-content:center; + border-radius:12px; + background: linear-gradient(135deg, rgba(255,255,255,0.03), rgba(255,255,255,0.01)); +} +.header h1{margin:0;font-size:18px} +.header .subtitle{margin:0;font-size:12px;color:var(--muted)} + +.chat{ + background: linear-gradient(180deg, rgba(255,255,255,0.02), rgba(255,255,255,0.01)); + padding:18px; + border-radius:16px; + flex:1 1 auto; + display:flex; + flex-direction:column; + overflow:hidden; + box-shadow: 0 20px 40px rgba(2,6,23,0.6); +} + +.messages{ + overflow:auto; + padding:8px; + display:flex; + flex-direction:column; + gap:12px; + scrollbar-width: thin; +} + +/* Message bubble */ +.message{ + max-width:85%; + display:inline-block; + padding:12px 14px; + border-radius:12px; + color: #e6eef8; + line-height:1.4; + box-shadow: 0 6px 18px rgba(2,6,23,0.45); +} +.message.user{ + margin-left:auto; + background: var(--user-bg); + border-radius: 16px 16px 6px 16px; + text-align:left; +} +.message.ai{ + margin-right:auto; + background: linear-gradient(135deg, rgba(255,255,255,0.02), rgba(255,255,255,0.01)); + border: 1px solid rgba(255,255,255,0.03); + color: #cfe6ff; + border-radius: 16px 16px 16px 6px; +} +.message small{display:block;color:var(--muted);font-size:11px;margin-top:6px} + +/* Typing indicator (dots) */ +.typing{ + display:inline-flex;gap:6px;align-items:center;padding:8px 12px;border-radius:10px;background:rgba(255,255,255,0.02) +} +.typing .dot{width:8px;height:8px;border-radius:50%;background:var(--muted);opacity:0.9} +@keyframes blink{0%{transform:translateY(0);opacity:0.25}50%{transform:translateY(-4px);opacity:1}100%{transform:translateY(0);opacity:0.25}} +.typing .dot:nth-child(1){animation:blink 1s infinite 0s} +.typing .dot:nth-child(2){animation:blink 1s infinite 0.15s} +.typing .dot:nth-child(3){animation:blink 1s infinite 0.3s} + +/* 
Composer */ +.composer{ + display:flex; + gap:12px; + align-items:center; + padding-top:12px; + border-top:1px dashed rgba(255,255,255,0.02); +} +.composer textarea{ + resize:none; + min-height:44px; + max-height:160px; + padding:12px 14px; + border-radius:12px; + border: none; + outline: none; + background: rgba(255,255,255,0.02); + color: #e6eef8; + flex:1 1 auto; + font-size:14px; +} +.composer button{ + background: linear-gradient(135deg,var(--accent),var(--accent-2)); + color:white; + border:none; + padding:12px 16px; + border-radius:12px; + cursor:pointer; + font-weight:600; + box-shadow: 0 8px 24px rgba(12,6,40,0.5); + transition: transform .12s ease, box-shadow .12s ease; +} +.composer button:active{transform:translateY(1px)} + +.footer{color:var(--muted);font-size:12px;text-align:center} + +/* small screens */ +@media (max-width:640px){ + body{padding:16px} + .app{height:calc(100vh - 32px)} + .header h1{font-size:16px} +} \ No newline at end of file diff --git a/README.md b/README.md index 3e90988f..7cdbe44d 100644 --- a/README.md +++ b/README.md @@ -27,6 +27,10 @@ Learn the fundamentals of web development with our 12-week comprehensive course Visit [**Student Hub page**](https://docs.microsoft.com/learn/student-hub/?WT.mc_id=academic-77807-sagibbon) where you will find beginner resources, Student packs and even ways to get a free certificate voucher. This is the page you want to bookmark and check from time to time as we switch out content monthly. +### 📣 Announcement - _New Project to build using Generative AI_ + +New AI Assistant project just added, check it out [project](./09-chat-project/README.md) + ### 📣 Announcement - _New Curriculum_ on Generative AI for JavaScript was just released Don't miss our new Generative AI curriculum! @@ -146,6 +150,7 @@ Our recommendation is to use [Visual Studio Code](https://code.visualstudio.com/ | 22 | [Banking App](./7-bank-project/solution/README.md) | Build a Login and Registration Form | Learn about building forms and handling validation routines | [Forms](./7-bank-project/2-forms/README.md) | Yohan | | 23 | [Banking App](./7-bank-project/solution/README.md) | Methods of Fetching and Using Data | How data flows in and out of your app, how to fetch it, store it, and dispose of it | [Data](./7-bank-project/3-data/README.md) | Yohan | | 24 | [Banking App](./7-bank-project/solution/README.md) | Concepts of State Management | Learn how your app retains state and how to manage it programmatically | [State Management](./7-bank-project/4-state-management/README.md) | Yohan | +| 25 | [AI Assistant project](./09-chat-project/README.md) | Working with AI | Learn how to build your own AI assistant | | Chris | ## 🏫 Pedagogy