AWS examples for a makeathon
Check out the Quick Start Guide document, which covers:
- Login
- Roles & Permissions
- Access Keys
- Sagemaker AI Platform & Jupyter Notebooks
- S3 Storage
All TypeScript examples (Bedrock, S3, S3 Vectors, LangChain, RAG) are in the typescript/ folder with their own setup, docs, and README.
Quick overview of what's inside:
| Script | File | What it does |
|---|---|---|
| `npm run verify` | `src/verify.ts` | Check that your credentials work |
| `npm run bedrock` | `src/bedrock.ts` | Invoke any Bedrock model (simple + streaming) |
| `npm run s3` | `src/s3.ts` | Upload / download / list S3 objects |
| `npm run rag` | `src/rag.ts` | Full RAG pipeline with S3 Vectors (raw SDK) |
| `npm run langchain` | `src/langchain-rag.ts` | RAG with LangChain + Bedrock |
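The flow of the `bedrock` script can be sketched in Python with boto3 as well (the region, model ID, and prompt below are assumptions, not values from the repo):

```python
# Rough Python equivalent of `npm run bedrock`, using the Bedrock Converse API.
# The region and model ID are assumptions -- pick any eu. inference profile ID.

def build_converse_messages(prompt: str) -> list:
    """Build the message list expected by the Bedrock Converse API."""
    return [{"role": "user", "content": [{"text": prompt}]}]

def invoke_model(prompt: str) -> str:
    import boto3  # imported lazily so the helper above works without boto3 installed
    client = boto3.client("bedrock-runtime", region_name="eu-central-1")
    resp = client.converse(
        modelId="eu.anthropic.claude-3-5-sonnet-20240620-v1:0",
        messages=build_converse_messages(prompt),
    )
    return resp["output"]["message"]["content"][0]["text"]
```

Running `invoke_model` requires valid AWS credentials in your environment; the message-building helper works offline.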
Make sure you never store access keys in a public location! In the `python/py` folder you can find example files for S3 and Bedrock access as well.
To run the example files locally, follow these steps:

- Create a virtual Python environment: `python3 -m venv .venv`
- Activate the virtual environment: `source .venv/bin/activate`
- Install the required libraries: `pip install -r requirements.txt`

Source: https://docs.python.org/3/library/venv.html
- Create an AWS Access key (Link)
- Create a copy of the `.env.example` file and name it `.env`
- Store the `Key ID` and the `Key Secret` in the `.env` file
WARNING: Make sure you NEVER add these keys to a public repository!
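The `.env` handling described above can be sketched like this. The variable names (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`) are assumptions, and a real project would typically use the `python-dotenv` package instead of this hand-rolled parser:

```python
def load_env_file(path: str = ".env") -> dict:
    """Minimal .env parser: KEY=VALUE lines; blank lines and '#' comments ignored."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

def make_session(env: dict):
    """Create a boto3 session from the parsed values (keys are assumed names)."""
    import boto3  # imported lazily so the parser is testable without boto3
    return boto3.Session(
        aws_access_key_id=env["AWS_ACCESS_KEY_ID"],
        aws_secret_access_key=env["AWS_SECRET_ACCESS_KEY"],
        region_name=env.get("AWS_REGION", "eu-central-1"),
    )
```

Because the keys live only in `.env`, adding `.env` to `.gitignore` keeps them out of the repository.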
With minor adjustments you can run all the examples on AWS Sagemaker Notebooks. This makes the setup easier in many cases, as it integrates very well with the AWS environment and other services.
Check out the S3_Example.ipynb notebook.
Check out the Bedrock_Example.ipynb notebook.
Check out the RAG_agent_example repository for a simple LangGraph agent that uses S3 Vectors to run similarity queries.
There are also example `.py` files for accessing Bedrock and S3 under `python/py/`.
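What those S3 example files do can be sketched roughly like this. The bucket name, region, and the `object_key` helper are hypothetical, not taken from the repo:

```python
def object_key(prefix: str, filename: str) -> str:
    """Join a key prefix and filename without doubling slashes (hypothetical helper)."""
    return f"{prefix.rstrip('/')}/{filename}" if prefix else filename

def s3_roundtrip(bucket: str, key: str, body: bytes) -> bytes:
    """Upload an object, list the bucket contents, then download the object back."""
    import boto3  # imported lazily so object_key stays usable without boto3
    s3 = boto3.client("s3", region_name="eu-central-1")  # region is an assumption
    s3.put_object(Bucket=bucket, Key=key, Body=body)
    for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
        print(obj["Key"], obj["Size"])
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
```

`s3_roundtrip` needs valid credentials and an existing bucket; `object_key` works offline.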
If you're using an AI coding assistant (Cursor, Windsurf, Claude Code, GitHub Copilot, etc.), you can give it direct access to the latest LangChain documentation via their MCP server. This means your assistant will give you accurate, up-to-date LangChain code instead of hallucinating outdated APIs.
MCP Server URL:
https://docs.langchain.com/mcp
Claude Code:
claude mcp add --transport http docs-langchain https://docs.langchain.com/mcp

Cursor / Windsurf: add to your MCP settings (.cursor/mcp.json or equivalent):
{
"mcpServers": {
"langchain-docs": {
"type": "http",
"url": "https://docs.langchain.com/mcp"
}
}
}

Once connected, your assistant can search LangChain, LangGraph, and LangSmith docs in real time. More details: docs.langchain.com/use-these-docs
- Always use `eu.` inference profile IDs for Bedrock models to keep data in EU regions. All available models and their inference profile IDs are listed under Bedrock Inference Profiles.
- Don't commit your keys and don't share them publicly.
- S3 bucket names must be lowercase (only letters, numbers, and hyphens) and globally unique.
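A quick way to sanity-check a name against the simplified rule above (the real S3 naming rules are stricter: dots are also allowed, but IP-address-like names and certain prefixes are not):

```python
import re

# Simplified validator matching the rule stated above: lowercase letters,
# digits, and hyphens, 3-63 characters, starting and ending alphanumeric.
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    return bool(BUCKET_RE.match(name))
```

Checking names locally avoids a round trip to AWS just to get an `InvalidBucketName` error.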