First LLM Call (Hello World)
When we talk to an AI via code, we aren't just sending text; we are constructing a Payload.
- `model`: Specifies which "brain" to use (e.g., `llama3.2`, `gpt-4`).
- `messages`: A list of dictionaries representing the conversation history.
  - `role: "user"` → My input.
  - `role: "assistant"` → The AI's previous answers.
  - `role: "system"` → Instructions on how the AI should behave (we will cover this tomorrow).
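As a concrete sketch, a single-turn payload might look like this in Python (the model name and message contents below are illustrative placeholders, not from my actual script):

```python
# Illustrative payload for one chat turn: a model name plus the
# running list of role/content message dictionaries.
payload = {
    "model": "llama3.2",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Hello, world!"},
    ],
}

# The last entry is always the newest user turn.
print(payload["messages"][-1]["content"])
```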
The AI doesn't just return a string. It returns a JSON Object containing:
- `model`: The name of the model used.
- `created_at`: Timestamp of the response.
- `message`: The actual content (this is what we extract).
- `done`: Boolean completion status.
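To make that shape concrete, here is a hand-written response in the same format and how the reply text gets pulled out (the timestamp and reply text are invented for illustration, not real model output):

```python
import json

# A hand-written example response shaped like the fields listed above;
# the timestamp and assistant text are made up for illustration.
raw = """{
  "model": "llama3.2",
  "created_at": "2025-01-05T12:00:00Z",
  "message": {"role": "assistant", "content": "Hello! How can I help you today?"},
  "done": true
}"""

response = json.loads(raw)
answer = response["message"]["content"]  # the part we actually extract
print(answer)  # → Hello! How can I help you today?
```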
I chose to use Ollama for this stage because:
- Zero Latency: No internet lag.
- Zero Cost: I can test a loop of 100 calls without paying $0.01.
- Privacy: My prompts (which might eventually contain private data) never leave my machine.
- SDK Installed (`pip install ollama`)
- Python Script `day5_llm_call.py` Created
- Successful Response Received