ai
Generate any codebase in any language, setup files for any library, or bring any idea to life—all powered by AI.
Have ideas for new features or improvements? We'd love to hear from you! Please share your thoughts by submitting a feature request using the button below.
Usage
tps ai [app-name]
# or in a directory
tps ai ./path/to/dir/[app-name]
| - [app-name]/
|   - <files and directories AI creates>
Example
- default: tps ai my-express-app
- long build path: tps ai projects/my-express-app
- no build path: tps ai
Check out our examples section to see detailed instructions on how to use this template.
Installation
This template is not included in the templates library and must be installed separately.
npm i -g templates-mo tps-ai
# Or if you already have the templates library installed
npm i -g tps-ai
Options
| name | description | option | alias | default | hidden |
| --- | --- | --- | --- | --- | --- |
| build | The instruction describing what you want the LLM to build | --build | | | false |
| provider | The LLM provider you want to use (openai, anthropic, azure, google, amazon-bedrock, deepseek) | --provider | | openai | false |
| model | The LLM model you want to use | --model | | gpt-4o-mini | false |
| token | API token for the LLM API | --token | | | false |
| baseUrl | Change the base URL for your AI provider | --baseUrl | | | true |
| prompts | Additional prompts/instructions to pass to the AI model | --prompts | | [] | true |
Configuration
To use this template, you need to know which AI provider you want to use, the AI model, and have all the required credentials. Each AI provider requires different credentials, so refer to the AI providers section to learn the correct configurations for that AI provider.
When generating a new instance, you will be prompted for your token, AI provider, and model. However, we recommend using one of the following methods to answer these questions:
- Global Config File (recommended)
- Environment Variables
- On the CLI
You can define your preferred AI provider, model, and API token in your global config file (e.g., ~/.tps/.tpsrc). We recommend the global config file because your local config file is more likely to be checked into your codebase, potentially leaking your credentials. Plus, defining these at a global level allows you to use these credentials throughout your file system.
Run the following command to initialize your global config file if you haven't already:
tps init -g
Then add your preferred AI provider, model, and API token to ~/.tps/.tpsrc, as shown below:
{
  "ai": {
    "answers": {
      "token": "<token>",
      "model": "gpt-4o",
      "provider": "openai"
    }
  }
}
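Once these answers are saved, the CLI skips the token, provider, and model prompts when generating. For example (the app name here is illustrative):
tps ai my-express-app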
Each AI provider supports unique environment variables that you can use for its credentials. Refer to the AI providers section for the provider you're using to see which environment variables are available. However, the model and provider options will still need to be specified using one of the other methods.
The Templates CLI loads the first .env file it finds while walking up parent directories. You can learn more about how environments are loaded in our env docs. You can also pass environment variables on the command line or set them in your shell before running the command.
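For example, assuming the OpenAI provider (covered below), either of the following works; the key value is a placeholder:
# .env file in your project or any parent directory
OPENAI_API_KEY=<your-api-key>
# or set inline in a bash-style shell when running the command
OPENAI_API_KEY=<your-api-key> tps ai my-app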
You can pass in your preferred AI provider, model, and API token when generating your new instance, as shown below:
tps ai --provider openai --model gpt-4o --token <token>
However, this is not recommended because you need to supply these values every time you run the CLI command.
AI Providers
Templates supports multiple AI providers, each with its own unique setup. Below are the supported AI providers and how to configure them.
Templates uses Vercel's AI SDK to interact with the AI providers. Currently, we only support a subset of the providers that the AI SDK supports. The model you choose needs to support object generation in order to use it with this template. If you would like to see a provider that we don't support, drop us a request.
Templates does not support passing options to the AI models at the moment. However, this is a feature that we are looking to support in the future.
OpenAI
Supported models: https://sdk.vercel.ai/providers/ai-sdk-providers/openai#model-capabilities
Environment variable: OPENAI_API_KEY
Docs: https://sdk.vercel.ai/providers/ai-sdk-providers/openai
API key Guide: https://medium.com/@duncanrogoff/how-to-get-your-openai-api-key-in-2024-the-complete-guide-6b52d82c7362
To use OpenAI, you will need to have an API key. You can get one by following the OpenAI API key guide.
Once you have your API key, you can configure it using the token option. This provider also supports specifying your API token using the OPENAI_API_KEY environment variable.
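For example, both of the following are equivalent sketches (the key is a placeholder, and the same pattern applies to the other key-based providers below):
# pass the key with the token option
tps ai my-app --provider openai --model gpt-4o-mini --token <your-api-key>
# or export the environment variable first
export OPENAI_API_KEY=<your-api-key>
tps ai my-app --provider openai --model gpt-4o-mini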
Anthropic (alpha)
Supported models: https://sdk.vercel.ai/providers/ai-sdk-providers/anthropic#model-capabilities
Environment variable: ANTHROPIC_API_KEY
Docs: https://sdk.vercel.ai/providers/ai-sdk-providers/anthropic
API key docs: https://docs.anthropic.com/en/api/getting-started#accessing-the-api
To use Anthropic, you will need to have an API key. You can get one by following the Anthropic API key docs.
Once you have your API key, you can configure it using the token option. This provider also supports specifying your API token using the ANTHROPIC_API_KEY environment variable.
Amazon Bedrock (alpha)
Supported models: https://sdk.vercel.ai/providers/ai-sdk-providers/amazon-bedrock#model-capabilities
Environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION
Docs: https://sdk.vercel.ai/providers/ai-sdk-providers/amazon-bedrock
API key docs: https://sdk.vercel.ai/providers/ai-sdk-providers/amazon-bedrock#authentication
This provider requires the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION environment variables. The token and baseUrl options are not supported by this provider.
To use Amazon Bedrock, you will need an access key, a secret access key, and the AWS region. You can get these by following the Amazon Bedrock authentication docs.
Once you have your credentials, you can configure them using environment variables. You can provide these environment variables by following the Environment Variables tab in the configuration section. All three variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION) must be set for this provider to work.
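For example, in a bash-style shell (all values are placeholders; pick a model ID from the supported models link above):
export AWS_ACCESS_KEY_ID=<access-key-id>
export AWS_SECRET_ACCESS_KEY=<secret-access-key>
export AWS_REGION=<region>
tps ai my-app --provider amazon-bedrock --model <model-id>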
Google (alpha)
Supported models: https://sdk.vercel.ai/providers/ai-sdk-providers/google-generative-ai#model-capabilities
Environment variable: GOOGLE_GENERATIVE_AI_API_KEY
Docs: https://sdk.vercel.ai/providers/ai-sdk-providers/google-generative-ai
API key docs: https://ai.google.dev/gemini-api/docs/api-key
To use Google, you will need to have an API key. You can get one by following the Google API key docs.
Once you have your API key, you can configure it using the token option. This provider also supports specifying your API token using the GOOGLE_GENERATIVE_AI_API_KEY environment variable.
Deepseek (alpha)
Supported models: https://sdk.vercel.ai/providers/ai-sdk-providers/deepseek#model-capabilities
Environment variable: DEEPSEEK_API_KEY
Docs: https://sdk.vercel.ai/providers/ai-sdk-providers/deepseek
API key docs: https://api-docs.deepseek.com/
To use Deepseek, you will need to have an API key. You can get one by following the Deepseek API key docs.
Once you have your API key, you can configure it using the token option. This provider also supports specifying your API token using the DEEPSEEK_API_KEY environment variable.
Azure (alpha)
Environment variables: AZURE_API_KEY, AZURE_RESOURCE_NAME
Docs: https://sdk.vercel.ai/providers/ai-sdk-providers/azure
API key docs: https://docs.mindmac.app/how-to.../add-api-key/create-azure-openai-api-key
This provider requires the AZURE_RESOURCE_NAME environment variable. Use the model option to specify Azure's deployment name.
To use Azure, you will need an API key, a resource name, and a deployment name. You can get these by following the Azure API key docs.
Once you have your API key, you can configure it using the token option. This provider also supports specifying your API token using the AZURE_API_KEY environment variable.
To use this provider, you also need to supply an Azure resource name via the AZURE_RESOURCE_NAME environment variable. You can provide these environment variables by following the Environment Variables tab in the configuration section.
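Putting it together, a minimal sketch (all values are placeholders; remember that the model option carries the Azure deployment name):
export AZURE_API_KEY=<api-key>
export AZURE_RESOURCE_NAME=<resource-name>
tps ai my-app --provider azure --model <deployment-name>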
Examples
How to use
You can use our AI template to generate anything your mind desires. If you haven't already, please configure your AI provider before using this template.
Once configured, you're ready to go. This template can be used in two ways:
- With a build path
- Without a build path
The build path is the name and location you provide for your new instance. Templates uses it to create a directory and render its contents there.
Example:
tps ai food-app
#      |______|
#       ^ Build path and instance name
tps ai ./project/food-app
#      |________||______|
#       ^ location ^ instance name
#      |________________|
#       ^ build path
You can learn more about build paths in the build path docs.
Build path
When generating a new instance with a build path, Templates will create a new directory based on this build path and put any files or folders the LLM creates into this directory.
Example:
If you want to build an Express app for your new food app idea named food-app, you can run the following command:
tps ai food-app
? What would you like to build? Create the contents of an Express app. I want all the bells and whistles. It should have a MongoDB connection as well.
This will generate a food-app directory in your current working directory and place all files and directories the AI creates inside it.
| - food-app/
|   - <files and directories created by LLM...>
No Build Path
When generating a new instance with no build path, all files or folders the AI creates will be placed into your current working directory. This is useful if you want content to be generated directly inside your current working directory, or when you want the AI to create a directory for you; in the latter case, you need to explicitly tell the AI to create it.
Example:
If you have an existing repo and want to add Jest support, placing all unit tests inside a __tests__ directory:
tps ai
? What would you like to build? Create a jest.config.js file that will run my Node library. It's all written in JavaScript, and at least one test file should live in the `__tests__` folder.
This will generate a __tests__ directory and a jest.config.js file inside your current working directory.
| - <cwd>/
|   - __tests__/
|     - <test files and directories created by LLM...>
|   - jest.config.js
The more specific you are with the AI, the better outcome you will get.
For example:
- Vague: "Create an API."
- Specific: "Create a RESTful API using Express with routes for user authentication and a MongoDB connection."
Providing clear details helps the AI understand your exact needs and deliver more accurate results.
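If you would rather skip the interactive question, the build option from the options table accepts the same instruction up front (the app name and description here are illustrative):
tps ai food-app --build "Create a RESTful API using Express with routes for user authentication and a MongoDB connection."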
Base URL
The baseUrl option lets you change the base URL that this template sends requests to. This is useful when you want to interact with different environments or your own custom AI instance. Most AI providers support this base URL option, but always refer to the provider's docs to make sure.
- Config
- CLI
{
  "ai": {
    "answers": {
      "baseUrl": "https://my-custom-ai.com/api"
    }
  }
}
tps ai --baseUrl https://my-custom-ai.com/api
Additional Prompts
The prompts option lets you add additional prompts to the AI in order to customize it or give it specific instructions.
- Config
- CLI
{
  "ai": {
    "answers": {
      "prompts": ["I use mjs for my js files"]
    }
  }
}
tps ai --prompts "I use the .mjs extension for my javascript files"
Now when the AI generates JavaScript files, it will use the .mjs extension instead of .js.
Templates does not extend the prompts option, and any values defined higher in your filesystem will be overridden.
Example:
If you declare prompts in your global config file and some in your repo's config file, then Templates will only use the value from your repo's config file, not your global config. We are actively looking into solutions to support extending template options.
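For instance, given the two config files below, Templates uses only the repo-level value (this sketch assumes the repo-level config file is also named .tpsrc; the prompt strings are illustrative):
~/.tps/.tpsrc (global, ignored for prompts):
{
  "ai": {
    "answers": {
      "prompts": ["I use mjs for my js files"]
    }
  }
}
./.tpsrc (repo, takes precedence):
{
  "ai": {
    "answers": {
      "prompts": ["Always include JSDoc comments"]
    }
  }
}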
Inspiration
Need some inspiration on what you can do? Here are some examples:
- tps ai: Generate a README file for my library. This will be published on GitHub, so include common sections and best practices for a Node.js library.
- tps ai: Create a jest.config.js file to run tests for my Node.js library, which is currently written in JavaScript. Include at least one test file located in the __tests__ folder.
- tps ai: Create a GitHub Action file for my Node.js library. The action, named ci, should run npm test using Node.js v18. It should trigger on pull requests when I push a commit.
- tps ai <app-name>: Create the contents of a full-featured Express app called food-app, with all the bells and whistles. Include a MongoDB connection as part of the setup.