Merge pull request #1491 from microsoft/softchris-patch-4

docs: Project for building an AI assistant
This commit is contained in:
chris
2025-08-28 15:19:10 +01:00
committed by GitHub
14 changed files with 766 additions and 0 deletions

1
.gitignore vendored

@@ -9,6 +9,7 @@
*.user
*.userosscache
*.sln.docstates
venv
# User-specific files (MonoDevelop/Xamarin Studio)
*.userprefs

376
09-chat-project/README.md Normal file

@@ -0,0 +1,376 @@
# Chat project
This chat project shows how to build a Chat Assistant using GitHub Models.
Here's what the finished project looks like:
<div>
<img src="./assets/screenshot.png" alt="Chat app" width="600">
</div>
Some context: building chat assistants with generative AI is a great way to start learning about AI. In this lesson you'll learn how to integrate generative AI into a web app. Let's begin.
## Connecting to generative AI
For the backend, we're using GitHub Models. It's a great service that lets you use AI for free. Go to its playground and grab the code that corresponds to your chosen backend language. Here's what the [GitHub Models Playground](https://github.com/marketplace/models/azure-openai/gpt-4o-mini/playground) looks like:
<div>
<img src="./assets/playground.png" alt="GitHub Models AI Playground" width="600">
</div>
Select the "Code" tab and your chosen runtime.
<div>
<img src="./assets/playground-choice.png" alt="playground choice" width="600">
</div>
In this case we select Python, which means we pick this code:
```python
"""Run this model in Python
> pip install openai
"""
import os
from openai import OpenAI
# To authenticate with the model you will need to generate a personal access token (PAT) in your GitHub settings.
# Create your PAT token by following instructions here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens
client = OpenAI(
base_url="https://models.github.ai/inference",
api_key=os.environ["GITHUB_TOKEN"],
)
response = client.chat.completions.create(
messages=[
{
"role": "system",
"content": "",
},
{
"role": "user",
"content": "What is the capital of France?",
}
],
model="openai/gpt-4o-mini",
temperature=1,
max_tokens=4096,
top_p=1
)
print(response.choices[0].message.content)
```
Let's clean up this code a bit so it's reusable:
```python
def call_llm(prompt: str, system_message: str):
response = client.chat.completions.create(
messages=[
{
"role": "system",
"content": system_message,
},
{
"role": "user",
"content": prompt,
}
],
model="openai/gpt-4o-mini",
temperature=1,
max_tokens=4096,
top_p=1
)
return response.choices[0].message.content
```
With this `call_llm` function, we can now pass in a prompt and a system prompt, and the function returns the result.
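Since the client reads the token from the `GITHUB_TOKEN` environment variable, a small guard that fails early with a clear message can save debugging time. This is a defensive sketch, not part of the original code:

```python
import os

def get_github_token() -> str:
    # Read the PAT used to authenticate against GitHub Models; fail early
    # with an actionable message if it hasn't been exported.
    token = os.environ.get("GITHUB_TOKEN")
    if not token:
        raise RuntimeError(
            "GITHUB_TOKEN is not set. Create a PAT in your GitHub settings "
            "and export it before starting the app."
        )
    return token
```

You could then pass `api_key=get_github_token()` when constructing the client.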
### Customize AI Assistant
If you want to customize the AI assistant you can specify how you want it to behave by populating the system prompt like so:
```python
call_llm("Tell me about you", "You're Albert Einstein, you only know of things in the time you were alive")
```
## Expose it via a Web API
Great, the AI part is done; let's see how we can integrate it into a Web API. For the Web API, we're choosing Flask, but any web framework should work. Let's see the code:
```python
# api.py
from flask import Flask, request, jsonify
from llm import call_llm
from flask_cors import CORS
app = Flask(__name__)
CORS(app)  # allow all origins for now; restrict this in production
@app.route("/", methods=["GET"])
def index():
return "Welcome to this API. Call POST /hello with 'message': 'my message' as JSON payload"
@app.route("/hello", methods=["POST"])
def hello():
# get message from request body { "message": "do this task for me" }
data = request.get_json()
message = data.get("message", "")
response = call_llm(message, "You are a helpful assistant.")
return jsonify({
"response": response
})
if __name__ == "__main__":
app.run(host="0.0.0.0", port=5000)
```
Here, we create a Flask API and define two routes: a default "/" and "/hello". The latter is meant to be used by our frontend to pass questions to it.
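To sanity-check a route like this without a browser, you could exercise it with Flask's built-in test client. Below is a minimal sketch of the same route wired to a stubbed LLM (`fake_llm` is a stand-in for `call_llm`, used here so the example is self-contained and makes no model call):

```python
from flask import Flask, request, jsonify

def fake_llm(prompt: str, system_message: str) -> str:
    # Stand-in for call_llm so no model call is needed
    return f"echo: {prompt}"

app = Flask(__name__)

@app.route("/hello", methods=["POST"])
def hello():
    data = request.get_json()
    message = data.get("message", "")
    return jsonify({"response": fake_llm(message, "You are a helpful assistant.")})

# Exercise the route in-process, no server needed
client = app.test_client()
resp = client.post("/hello", json={"message": "hi"})
print(resp.get_json())  # {'response': 'echo: hi'}
```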
To integrate *llm.py* here's what we need to do:
- Import the `call_llm` function:
```python
from llm import call_llm
from flask import Flask, request
```
- Call it from the "/hello" route:
```python
@app.route("/hello", methods=["POST"])
def hello():
# get message from request body { "message": "do this task for me" }
data = request.get_json()
message = data.get("message", "")
response = call_llm(message, "You are a helpful assistant.")
return jsonify({
"response": response
})
```
Here we parse the incoming request to retrieve the `message` property from the JSON body. Then we call the LLM:
```python
response = call_llm(message, "You are a helpful assistant.")
# return the response as JSON
return jsonify({
"response": response
})
```
Great, now we have done what we need.
### Configure CORS
We should call out that we set up CORS (cross-origin resource sharing). Because our backend and frontend will run on different ports, we need to allow the frontend to call into the backend. There's a piece of code in *api.py* that sets this up:
```python
from flask_cors import CORS
app = Flask(__name__)
CORS(app)  # allow all origins for now; restrict this in production
```
Right now it's set up to allow "*", meaning all origins. That's a bit unsafe; we should restrict it once we go to production.
## Run your project
OK, so we have *llm.py* and *api.py*; how can we make this work as a backend? Well, there are two things we need to do:
- Install dependencies:
```sh
cd backend
python -m venv venv
source ./venv/bin/activate
pip install openai flask flask-cors
```
- Start the API
```sh
python api.py
```
If you're in a Codespace, go to the Ports tab in the bottom part of the editor, right-click the forwarded port, click "Port Visibility", and select "Public".
### Work on a frontend
Now that we have an API up and running, let's create a bare-minimum frontend for it, which we'll improve stepwise. In a *frontend* folder, create the following:
```text
backend/
frontend/
index.html
app.js
styles.css
```
Let's start with **index.html**:
```html
<html>
<head>
<link rel="stylesheet" href="styles.css">
</head>
<body>
<form id="form">
<textarea id="messages"></textarea>
<input id="input" type="text" />
<button type="submit" id="sendBtn">Send</button>
</form>
<script src="app.js"></script>
</body>
</html>
```
The above is the absolute minimum you need to support a chat window: a textarea where messages are rendered, an input for typing the message, and a button for sending it to the backend. Let's look at the JavaScript next, in *app.js*.
**app.js**
```js
// app.js
(function(){
// 1. set up elements
const messages = document.getElementById("messages");
const form = document.getElementById("form");
const input = document.getElementById("input");
const BASE_URL = "change this";
const API_ENDPOINT = `${BASE_URL}/hello`;
// 2. create a function that talks to our backend
async function callApi(text) {
const response = await fetch(API_ENDPOINT, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ message: text })
});
let json = await response.json();
return json.response;
}
// 3. add response to our textarea
function appendMessage(text, role) {
const el = document.createElement("div");
el.className = `message ${role}`;
el.textContent = text; // textContent avoids injecting raw HTML into the page
messages.appendChild(el);
}
// 4. listen to submit events
form.addEventListener("submit", async(e) => {
e.preventDefault();
// someone clicked the button in the form
// get input
const text = input.value.trim();
appendMessage(text, "user")
// reset it
input.value = '';
const reply = await callApi(text);
// add to messages
appendMessage(reply, "assistant");
})
})();
```
Let's go through the code per section:
- 1) Here we get references to all the elements we'll refer to later in the code
- 2) In this section, we create a function that uses the built-in `fetch` method to call our backend
- 3) `appendMessage` helps add responses, as well as what you as a user type
- 4) Here we listen for the submit event: we read the input field, place the user's message in the textarea, call the API, and render its response in the textarea
Let's look at styling next. Here you can go really crazy and make it look however you want, but here are some suggestions:
**styles.css**
```css
.message {
background: #222;
box-shadow: 0 0 0 10px orange;
padding: 10px;
margin: 5px;
}
.message.user {
background: blue;
}
.message.assistant {
background: grey;
}
```
With these three classes, messages are styled differently depending on whether they come from the assistant or from you as a user. For inspiration, check out `solution/frontend/styles.css`.
### Change the Base URL
One thing we didn't set was `BASE_URL`; it isn't known until your backend has started. To set it:
- If you run the API locally, it should be set to something like `http://localhost:5000`.
- If you run it in Codespaces, it should look something like "[name]app.github.dev".
## Assignment
Create your own folder *project* with content like so:
```text
project/
frontend/
index.html
app.js
styles.css
backend/
api.py
llm.py
```
Copy the content from the instructions above, but feel free to customize it to your liking.
## Solution
[Solution](./solution/README.md)
## Bonus
Try changing the personality of the AI assistant. When you call `call_llm` in *api.py* you can change the second argument to what you want, for example:
```python
call_llm(message, "You are Captain Picard")
```
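One way to organize this is a small persona registry, so the system prompt can be swapped without touching the route. The persona names and prompts below are hypothetical examples:

```python
# Hypothetical persona registry: map a persona name to a system prompt
PERSONAS = {
    "assistant": "You are a helpful assistant.",
    "picard": "You are Captain Picard.",
    "einstein": "You're Albert Einstein; you only know of things from your lifetime.",
}

def system_prompt_for(persona: str) -> str:
    # Fall back to the default assistant for unknown persona names
    return PERSONAS.get(persona, PERSONAS["assistant"])
```

In *api.py* you could then write `call_llm(message, system_prompt_for("picard"))`.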
Also change the CSS and text to your liking by making changes in *index.html* and *styles.css*.
## Summary
Great, you've learned from scratch how to create a personal assistant using AI. We've done so using GitHub Models, a backend in Python, and a frontend in HTML, CSS and JavaScript.
## Set up with Codespaces
- Navigate to: [Web Dev For Beginners repo](https://github.com/microsoft/Web-Dev-For-Beginners)
- Create a repo from the template (make sure you're logged in to GitHub) using the button in the top-right corner:
![Create from template](./assets/template.png)
- Once in your repo, create a Codespace:
![Create codespace](./assets/codespace.png)
This should start an environment you can now work with.

Five binary image assets added (lesson screenshots; 42 KiB, 27 KiB, 281 KiB, 1.2 MiB, and 20 KiB). Contents not shown.


@@ -0,0 +1,41 @@
# Run code
## Set up
Create virtual environment
```sh
cd backend
python -m venv venv
source ./venv/bin/activate
```
## Install dependencies
```sh
pip install openai flask flask-cors
```
## Run API
```sh
python api.py
```
## Run frontend
Make sure you're in the *frontend* folder.
Locate *app.js* and change `BASE_URL` to your backend's URL.
Run it:
```sh
npx http-server -p 8000
```
Try typing a message in the chat; you should see a response (provided you're running this in a Codespace or have set up an access token).
## Set up access token (if you don't run this in a Codespace)
See [Set up PAT](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens)


@@ -0,0 +1,30 @@
# api
from flask import Flask, request, jsonify
from llm import call_llm
from flask_cors import CORS
app = Flask(__name__)
CORS(app)  # allow all origins for now; restrict this in production
@app.route("/", methods=["GET"])
def index():
return "Welcome to this lesson"
@app.route("/test", methods=["GET"])
def test():
return "Test"
@app.route("/hello", methods=["POST"])
def hello():
# get message from request body { "message": "do this task for me" }
data = request.get_json()
message = data.get("message", "")
response = call_llm(message, "You are a helpful assistant.")
return jsonify({
"response": response
})
if __name__ == "__main__":
app.run(host="0.0.0.0", port=5000)


@@ -0,0 +1,29 @@
import os
from openai import OpenAI
# To authenticate with the model you will need to generate a personal access token (PAT) in your GitHub settings.
# Create your PAT token by following instructions here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens
client = OpenAI(
base_url="https://models.github.ai/inference",
api_key=os.environ["GITHUB_TOKEN"],
)
def call_llm(prompt: str, system_message: str):
response = client.chat.completions.create(
messages=[
{
"role": "system",
"content": system_message,
},
{
"role": "user",
"content": prompt,
}
],
model="openai/gpt-4o-mini",
temperature=1,
max_tokens=4096,
top_p=1
)
return response.choices[0].message.content


@@ -0,0 +1,94 @@
// Replace placeholder JS with chat UI client logic
// Handles sending messages to backend and updating the UI
(function(){
const messagesEl = document.getElementById('messages');
const form = document.getElementById('composer');
const input = document.getElementById('input');
const sendBtn = document.getElementById('send');
const BASE_URL = "https://automatic-space-funicular-954qxp96rgcqjq-5000.app.github.dev/";
const API_ENDPOINT = `${BASE_URL}/hello`; // adjust if your backend runs elsewhere
function escapeHtml(str){
if(!str) return '';
return str.replace(/&/g,'&amp;')
.replace(/</g,'&lt;')
.replace(/>/g,'&gt;')
.replace(/"/g,'&quot;')
.replace(/'/g,'&#039;');
}
function formatText(text){
return escapeHtml(text).replace(/\n/g,'<br>');
}
function scrollToBottom(){
messagesEl.scrollTop = messagesEl.scrollHeight;
}
function appendMessage(role, text){
const el = document.createElement('div');
el.className = 'message ' + role;
el.innerHTML = `<div class="content">${formatText(text)}</div><small>${new Date().toLocaleTimeString()}</small>`;
messagesEl.appendChild(el);
scrollToBottom();
return el;
}
function createTyping(){
const el = document.createElement('div');
el.className = 'message ai';
const typing = document.createElement('div');
typing.className = 'typing';
for(let i=0;i<3;i++){ const d = document.createElement('span'); d.className = 'dot'; typing.appendChild(d); }
el.appendChild(typing);
messagesEl.appendChild(el);
scrollToBottom();
return el;
}
async function sendToApi(text){
const res = await fetch(API_ENDPOINT, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ message: text })
});
if(!res.ok) throw new Error('Network response was not ok');
let json = await res.json();
return json.response;
}
form.addEventListener('submit', async (e) => {
e.preventDefault();
const text = input.value.trim();
if(!text) return;
appendMessage('user', text);
input.value = '';
input.focus();
sendBtn.disabled = true;
const typingEl = createTyping();
try{
const reply = await sendToApi(text);
typingEl.remove();
appendMessage('ai', reply || '(no response)');
}catch(err){
typingEl.remove();
appendMessage('ai', 'Error: ' + err.message);
console.error(err);
}finally{
sendBtn.disabled = false;
}
});
// Enter to send, Shift+Enter for newline
input.addEventListener('keydown', (e) => {
if(e.key === 'Enter' && !e.shiftKey){
e.preventDefault();
form.dispatchEvent(new Event('submit', { cancelable: true }));
}
});
// Small welcome message
appendMessage('ai', 'Hello! I\'m your AI assistant. Ask me anything.');
})();


@@ -0,0 +1,35 @@
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Stellar AI Chat</title>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@300;400;600;700&display=swap" rel="stylesheet">
<link rel="stylesheet" href="styles.css">
</head>
<body>
<div class="app">
<header class="header">
<div style="display:flex;align-items:center;gap:12px;">
<div class="logo">🤖</div>
<div>
<h1>My company</h1>
<p class="subtitle">Dark-mode chat UI — powered by the backend AI</p>
</div>
</div>
</header>
<main class="chat" id="chat">
<div class="messages" id="messages" aria-live="polite"></div>
<form id="composer" class="composer" action="#">
<textarea id="input" placeholder="Say hello — press Enter to send (Shift+Enter for newline)" rows="2"></textarea>
<button id="send" type="submit">Send</button>
</form>
</main>
<footer class="footer">Running with backend at <code>http://127.0.0.1:5000/hello</code></footer>
</div>
<script src="app.js"></script>
</body>
</html>


@@ -0,0 +1,155 @@
/* Dark, modern chat styles for the AI chat page */
:root{
--bg-1: #0f1724;
--bg-2: #071226;
--panel: rgba(255,255,255,0.03);
--glass: rgba(255,255,255,0.04);
--accent: #7c3aed; /* purple */
--accent-2: #06b6d4; /* cyan */
--muted: rgba(255,255,255,0.55);
--user-bg: linear-gradient(135deg,#0ea5a4 0%, #06b6d4 100%);
--ai-bg: linear-gradient(135deg,#111827 0%, #0b1220 100%);
--radius: 14px;
--max-width: 900px;
}
*{box-sizing:border-box}
html,body{height:100%}
body{
margin:0;
font-family: 'Inter', system-ui, -apple-system, 'Segoe UI', Roboto, 'Helvetica Neue', Arial;
background: radial-gradient(1000px 500px at 10% 10%, rgba(124,58,237,0.12), transparent),
radial-gradient(800px 400px at 90% 90%, rgba(6,182,212,0.06), transparent),
linear-gradient(180deg,var(--bg-1), var(--bg-2));
color: #e6eef8;
-webkit-font-smoothing:antialiased;
-moz-osx-font-smoothing:grayscale;
padding:32px;
}
.app{
max-width:var(--max-width);
margin:0 auto;
height:calc(100vh - 64px);
display:flex;
flex-direction:column;
gap:16px;
}
.header{
display:flex;
align-items:center;
gap:16px;
padding:16px 20px;
border-radius:12px;
background: linear-gradient(180deg, rgba(255,255,255,0.02), rgba(255,255,255,0.01));
box-shadow: 0 6px 18px rgba(2,6,23,0.6);
backdrop-filter: blur(6px);
}
.header .logo{
font-size:28px;
width:56px;height:56px;
display:flex;align-items:center;justify-content:center;
border-radius:12px;
background: linear-gradient(135deg, rgba(255,255,255,0.03), rgba(255,255,255,0.01));
}
.header h1{margin:0;font-size:18px}
.header .subtitle{margin:0;font-size:12px;color:var(--muted)}
.chat{
background: linear-gradient(180deg, rgba(255,255,255,0.02), rgba(255,255,255,0.01));
padding:18px;
border-radius:16px;
flex:1 1 auto;
display:flex;
flex-direction:column;
overflow:hidden;
box-shadow: 0 20px 40px rgba(2,6,23,0.6);
}
.messages{
overflow:auto;
padding:8px;
display:flex;
flex-direction:column;
gap:12px;
scrollbar-width: thin;
}
/* Message bubble */
.message{
max-width:85%;
display:inline-block;
padding:12px 14px;
border-radius:12px;
color: #e6eef8;
line-height:1.4;
box-shadow: 0 6px 18px rgba(2,6,23,0.45);
}
.message.user{
margin-left:auto;
background: var(--user-bg);
border-radius: 16px 16px 6px 16px;
text-align:left;
}
.message.ai{
margin-right:auto;
background: linear-gradient(135deg, rgba(255,255,255,0.02), rgba(255,255,255,0.01));
border: 1px solid rgba(255,255,255,0.03);
color: #cfe6ff;
border-radius: 16px 16px 16px 6px;
}
.message small{display:block;color:var(--muted);font-size:11px;margin-top:6px}
/* Typing indicator (dots) */
.typing{
display:inline-flex;gap:6px;align-items:center;padding:8px 12px;border-radius:10px;background:rgba(255,255,255,0.02)
}
.typing .dot{width:8px;height:8px;border-radius:50%;background:var(--muted);opacity:0.9}
@keyframes blink{0%{transform:translateY(0);opacity:0.25}50%{transform:translateY(-4px);opacity:1}100%{transform:translateY(0);opacity:0.25}}
.typing .dot:nth-child(1){animation:blink 1s infinite 0s}
.typing .dot:nth-child(2){animation:blink 1s infinite 0.15s}
.typing .dot:nth-child(3){animation:blink 1s infinite 0.3s}
/* Composer */
.composer{
display:flex;
gap:12px;
align-items:center;
padding-top:12px;
border-top:1px dashed rgba(255,255,255,0.02);
}
.composer textarea{
resize:none;
min-height:44px;
max-height:160px;
padding:12px 14px;
border-radius:12px;
border: none;
outline: none;
background: rgba(255,255,255,0.02);
color: #e6eef8;
flex:1 1 auto;
font-size:14px;
}
.composer button{
background: linear-gradient(135deg,var(--accent),var(--accent-2));
color:white;
border:none;
padding:12px 16px;
border-radius:12px;
cursor:pointer;
font-weight:600;
box-shadow: 0 8px 24px rgba(12,6,40,0.5);
transition: transform .12s ease, box-shadow .12s ease;
}
.composer button:active{transform:translateY(1px)}
.footer{color:var(--muted);font-size:12px;text-align:center}
/* small screens */
@media (max-width:640px){
body{padding:16px}
.app{height:calc(100vh - 32px)}
.header h1{font-size:16px}
}


@@ -27,6 +27,10 @@ Learn the fundamentals of web development with our 12-week comprehensive course
Visit [**Student Hub page**](https://docs.microsoft.com/learn/student-hub/?WT.mc_id=academic-77807-sagibbon) where you will find beginner resources, Student packs and even ways to get a free certificate voucher. This is the page you want to bookmark and check from time to time as we switch out content monthly.
### 📣 Announcement - _New Project to build using Generative AI_
A new AI Assistant project was just added; check out the [project](./09-chat-project/README.md)
### 📣 Announcement - _New Curriculum_ on Generative AI for JavaScript was just released
Don't miss our new Generative AI curriculum!
@@ -146,6 +150,7 @@ Our recommendation is to use [Visual Studio Code](https://code.visualstudio.com/
| 22 | [Banking App](./7-bank-project/solution/README.md) | Build a Login and Registration Form | Learn about building forms and handling validation routines | [Forms](./7-bank-project/2-forms/README.md) | Yohan |
| 23 | [Banking App](./7-bank-project/solution/README.md) | Methods of Fetching and Using Data | How data flows in and out of your app, how to fetch it, store it, and dispose of it | [Data](./7-bank-project/3-data/README.md) | Yohan |
| 24 | [Banking App](./7-bank-project/solution/README.md) | Concepts of State Management | Learn how your app retains state and how to manage it programmatically | [State Management](./7-bank-project/4-state-management/README.md) | Yohan |
| 25 | [AI Assistant project](./09-chat-project/README.md) | Working with AI | Learn how to build your own AI assistant | | Chris |
## 🏫 Pedagogy