Documents
Getting Started
We’ll help you get started in the simplest and most effective way. Just follow the steps below and be patient.
Create your Assistant
First, open the Assistant Settings page and create a new Assistant.
Provide the key information for the Assistant. This helps the Assistant understand its role. You don’t need to provide detailed information about your business here (the full details will be covered in the Knowledge Base). It’s best to keep it as concise as possible.
Model: Choose the AI model for the Assistant. The smarter the model, the more Credits it consumes. Please refer to the PRICING page for more information on how Tokens and Credits are calculated.
Language: Select the default language for the Assistant. By default, the Assistant will detect the user’s language and respond in that language.
Brand name: Your brand name.
Introduction about assistant: A brief introduction to the Assistant’s features, which helps it better understand its role and improves the accuracy of its responses to users. Keep the information as concise and to the point as possible.
Assistant starting greeting: These are the initial greetings, such as “How can I assist you?”. You can enter multiple phrases, one per line, and one will be shown at random. You can toggle this feature on or off with a switch.
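For example, entering the two lines below (the second greeting is just an illustration) makes the Assistant open with one of them at random:

How can I assist you?
Hi there! What can I help you with today?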
Feature settings: You can require the Assistant to keep its responses brief, or control whether it answers questions unrelated to your business. For example, with off-topic answers disabled, asking “Who is the President of the United States?” will get a response that it doesn’t have enough information to answer.
Lastly, train the Assistant on this data.
Look at the bottom right corner, and you’ll see a chat bubble icon. You can now start chatting with the Assistant by asking some questions.
Please pay attention to the last question and answer, as the Assistant still doesn’t know your business information. Let’s move on to the next step, which is providing it with more details from the Knowledge Base.
Knowledge Base
LLM platforms do not store data between requests. To expand the Assistant’s knowledge, we therefore supply additional information with each Q&A session. The Assistant’s data store is called the Knowledge Base: each time the Assistant answers a user, it takes the question, searches the Knowledge Base, and uses the most relevant results as the information for its response.
Retrieving information from the Knowledge Base is like pulling a few pages from a book with thousands of pages to answer your question; it’s not suitable for summarizing the entire book. Understanding this limitation is key to building an Assistant that is smart enough to meet your needs.
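To make this concrete, below is a minimal, self-contained Python sketch of the retrieval flow. It is an illustration only, not Cremind’s implementation: real systems rank entries with vector embeddings, and simple keyword overlap stands in for that here.

# Illustrative sketch of Knowledge Base retrieval (not Cremind's actual code).
# The knowledge entries and the keyword-overlap scoring are stand-ins for a
# real embedding-based search.

KNOWLEDGE_BASE = [
    "Our store is open Monday to Friday, 9am to 6pm.",
    "We ship internationally within 5 to 7 business days.",
    "Returns are accepted within 30 days of purchase.",
]

def search_knowledge_base(question, limit=2):
    """Return the entries sharing the most words with the question."""
    words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda entry: len(words & set(entry.lower().split())),
        reverse=True,
    )
    return scored[:limit]

# The top results are injected into the prompt as context, so the model
# answers from a few relevant "pages" rather than the whole "book".
print(search_knowledge_base("What are your opening hours?"))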
Create (provide data for) the Knowledge Base
Go to Dashboard -> Assistant -> Knowledge Base. Then input data into the Knowledge Base manually, upload files (pdf, word, excel, txt, …), or crawl data from a website by providing its URL.
Try asking the Assistant again, and you’ll get a different result than before.
- Note: Use the Topic feature to categorize groups of knowledge. You can bulk-delete by topic, which saves time when updating and editing knowledge groups.
Developer
This is the advanced documentation for developers. If you’re a developer, feel free to continue.
Actions
Actions is where you set up the parameters to make requests to your server, fetching the necessary information to provide to the Assistant. This is where developers can extend the power of the Assistant and connect it to the real world.
To better understand Actions, I’ll guide you through an example of retrieving the current temperature at a specific location. Click the WEATHER button below to download a .json file containing the setup information for the Weather action. Then, go to Dashboard -> Assistant -> Developer -> Actions, click Import, and select the .json file you just downloaded. After that, you’ll see the result as shown in the image below.
JSON Schema
JSON Schema is a declarative language for defining structure and constraints for JSON data. You can refer to the JSON Schema in more detail through this link.
Actions use JSON Schema to describe the functionality of the feature to the Assistant and specify what parameters need to be created in order to send requests to the webhook and retrieve the necessary data.
Please refer to the JSON Schema structure below to guide the Assistant in inferring the two parameters: location and unit, for questions about temperature at a location, such as “Tell me the current temperature in New York”.
{
  "name": "get_weather",
  "parameters": {
    "type": "object",
    "required": [
      "location",
      "unit"
    ],
    "properties": {
      "unit": {
        "enum": [
          "c",
          "f"
        ],
        "type": "string"
      },
      "location": {
        "type": "string"
      }
    },
    "additionalProperties": false
  },
  "description": ""
}
The results will be inferred by the Assistant as follows:
{
  "location": "New York",
  "unit": "c"
}
Webhook Configuration
After the Assistant has obtained the necessary arguments, you need to set up the parameters for the Webhook, such as URL, Headers, and Params, so that the Assistant can make a request to your server.
- Note that you can also create variables to quickly reuse frequently repeated values. In the example above, I used the variable $API_KEY. You create it on the Variables management page; when you want to use it, just type $ and a list of variable names will appear for you to choose from.
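For illustration, a Headers entry in the Webhook configuration might look like the following (the header name mirrors the curl example below; the exact form fields are an assumption). At request time, $API_KEY is expanded to the variable’s value:

Authorization: Bearer $API_KEY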
Request
A request will be sent to your server based on the information above, similar to this curl command.
curl --location 'https://thirdparty.cremind.io/weather' \
--header 'Authorization: Bearer ' \
--header 'Content-Type: application/json' \
--data '{
  "message": "tell me the current temperature in New York",
  "conversation_id": "5304284b-52b7-4359-8f77-c76072173b0e",
  "arguments": {
    "location": "New York",
    "unit": "c"
  }
}'
For now, please use this URL: https://thirdparty.cremind.io/weather to run this example. I will guide you on how to create a simple server later.
Response
When your server receives a request like the one above, it must respond with parameters in a format you have set up in advance, so that our server correctly receives all the necessary information. An example is shown below; please pay attention to the setup in the Response section.
In this example, I return two pieces of information: location and temperature. Since the response type is TEXT, the return format must adhere to the following structure.
{
  "location": {
    "data": "New York",
    "description": "location description"
  },
  "temperature": {
    "data": "36",
    "description": "temperature description"
  }
}
You can add a key called ‘description’ to further explain the type of information being returned, or omit it if you prefer.
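For example, the same response with the optional ‘description’ keys omitted:

{
  "location": {
    "data": "New York"
  },
  "temperature": {
    "data": "36"
  }
}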
Now let’s try again and see the results.
Build your own server
Now, I’ll guide you on how to create a simple server using Node.js and Python.
- Install the dependency library
npm install express body-parser
- Create an index.js file
const express = require("express");
const bodyParser = require("body-parser");

const app = express();
app.use(bodyParser.json());

// Return a random temperature (0-40) for the requested location,
// using the response structure described above.
app.post("/weather", (req, res) => {
  const randomTemperature = Math.floor(Math.random() * 41);
  return res.status(200).json({
    location: {
      data: req.body.arguments.location,
      description: "location description",
    },
    temperature: {
      data: randomTemperature.toString(),
      description: "temperature description",
    },
  });
});

app.listen(8000, () => {
  console.log("Server is running on http://localhost:8000");
});
- Run the application
node index.js
- Install the dependency library
pip install fastapi uvicorn
- Create an app.py file
from fastapi import FastAPI
from pydantic import BaseModel
import random

app = FastAPI()

class WeatherRequest(BaseModel):
    arguments: dict

# Return a random temperature (0-40) for the requested location,
# using the response structure described above.
@app.post("/weather")
def get_weather(request: WeatherRequest):
    random_temperature = random.randint(0, 40)
    return {
        "location": {
            "data": request.arguments["location"],
            "description": "location description",
        },
        "temperature": {
            "data": str(random_temperature),
            "description": "temperature description",
        },
    }

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="127.0.0.1", port=8000)
- Run the application
python app.py
We will use ngrok to expose your service to the internet. Please note that ngrok should only be used for testing in this example. For production deployment, you need to establish a more robust server environment.
- Run ngrok
ngrok http 8000
Copy the https forwarding link (e.g. https://79ca-2405-4802-9100-c6d0-4ff-1ae8-220f-bdda.ngrok-free.app) and use it as the Webhook URL, remembering to append /weather.
Try asking the Assistant again, and you will get a result like the one below, but the response will come from your server.
API Key
Dashboard -> Assistant -> Developer -> API Key
If you are using the API provided by Cremind, you will need an API Key to make requests to Cremind.
Variables
Dashboard -> Assistant -> Developer -> Variables
You can create variables to quickly reuse frequently repeated values.
Embed
Dashboard -> Assistant -> Developer -> Embed
To save you time integrating our Assistant into your website, we’ve created a Chat Widget that you can easily embed directly into your site using an iframe. You don’t even need to have in-depth coding knowledge!
The structure of a Chat Widget URL is as follows:
URL: https://chatbox.cremind.io
Params:
- api_key (required): Your app’s API Key
- theme (optional): dark/light
- conversation_id (optional): Use this if you want to store chat history for the conversation. The next time you open the Chat Widget, you will still see the previous messages for this conversation_id.
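Putting it together, a minimal embed might look like the sketch below. YOUR_API_KEY is a placeholder, and passing the params as a query string is an assumption based on the parameter list above:

<!-- Chat Widget embed: replace YOUR_API_KEY with your app's API Key -->
<iframe
  src="https://chatbox.cremind.io?api_key=YOUR_API_KEY&theme=dark"
  width="400"
  height="600"
  style="border: none;">
</iframe>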
API
If you want to build an Assistant application for your product with optimal cost and without a cumbersome setup, then the Cremind API is a perfect choice. We have packaged available tools so you don’t need to build it from scratch; you can simply focus on your main goals for the product. You don’t even need to have a deep understanding of AI concepts or knowledge about Generative AI, LLMs, etc. We have taken care of that for you.
To connect to our system, please read the API documentation below, which includes instructions, explanations, and sample code for common programming languages.
Conversations
The Conversations API is used to send a message from your platform to the Cremind Assistant; it not only returns text results but also performs the actions you have set up on the Cremind dashboard.
Basic
Below is the most basic way to use the Conversations API. We utilize OpenAI’s library so that you can easily access it if you’re already familiar with the OpenAI API. If you haven’t grasped the concept and how to set up Actions, please refer back to the setup guide for Actions that I introduced earlier. If you’re already clear on that, you can skip ahead and continue with this example.
Please replace the server URL when initializing OpenAI and use your API key.
- Install the dependency library
npm install openai
- Create an index.js file
const { OpenAI } = require('openai');

const serverUrl = 'https://api.cremind.io/api/v1/conversation/';
const apiKey = 'b233c287d2fbcc1a34b9aa5f70413db58769b9f316a8f46cf55344675ad2643fc67daea09b98dbf18801918dcf4a70fa4fd3de57e6ea61fe7d426dbc533083582c4b4f8f70d0f128fe037e52df3ec6a0';

const openai = new OpenAI({
  baseURL: serverUrl,
  apiKey: apiKey
});

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [
      { "role": "user", "content": "Tell me the current temperature in New York" }
    ],
    stream: false
  });
  console.log(JSON.stringify(completion, null, 2));
}

main();
- Run the application
node index.js
- Install the dependency library
pip install openai
- Create an app.py file
from openai import OpenAI
import json

server_url = "https://api.cremind.io/api/v1/conversation/"
api_key = "b233c287d2fbcc1a34b9aa5f70413db58769b9f316a8f46cf55344675ad2643fc67daea09b98dbf18801918dcf4a70fa4fd3de57e6ea61fe7d426dbc533083582c4b4f8f70d0f128fe037e52df3ec6a0"

client = OpenAI(api_key=api_key, base_url=server_url)

def main():
    completion = client.chat.completions.create(
        messages=[
            {"role": "user", "content": "Tell me the current temperature in New York"},
        ],
        model="",
        stream=False,
    )
    print(json.dumps(completion.to_dict(), indent=2))

if __name__ == "__main__":
    main()
- Run the application
python app.py
curl "https://api.cremind.io/api/v1/conversation/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer b233c287d2fbcc1a34b9aa5f70413db58769b9f316a8f46cf55344675ad2643fc67daea09b98dbf18801918dcf4a70fa4fd3de57e6ea61fe7d426dbc533083582c4b4f8f70d0f128fe037e52df3ec6a0" \
-d '{
"messages": [
{
"role": "user",
"content": "Tell me the current temperature in New York"
}
]
}'
Response
{
  "id": "6d9b5f5e-b68b-4fe6-8b42-5ae55ca8141e",
  "object": "chat.completion",
  "created": 1728827642573,
  "model": "gpt-4o-mini",
  "system_fingerprint": "",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The current temperature in New York is 7°F.",
        "refusal": null
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 304,
    "completion_tokens": 31,
    "total_tokens": 335,
    "total_credits": 0.6642
  }
}
Streaming
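Set stream: true in the request to receive the response as a sequence of chunks instead of a single completion; the examples below are otherwise identical to the Basic ones.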
const { OpenAI } = require('openai');

const serverUrl = 'https://api.cremind.io/api/v1/conversation/';
const apiKey = 'b233c287d2fbcc1a34b9aa5f70413db58769b9f316a8f46cf55344675ad2643fc67daea09b98dbf18801918dcf4a70fa4fd3de57e6ea61fe7d426dbc533083582c4b4f8f70d0f128fe037e52df3ec6a0';

const openai = new OpenAI({
  baseURL: serverUrl,
  apiKey: apiKey
});

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [
      { "role": "user", "content": "Tell me the current temperature in New York" }
    ],
    stream: true
  });
  for await (const chunk of completion) {
    console.log(JSON.stringify(chunk, null, 2));
  }
}

main();
from openai import OpenAI
import json

server_url = "https://api.cremind.io/api/v1/conversation/"
api_key = "b233c287d2fbcc1a34b9aa5f70413db58769b9f316a8f46cf55344675ad2643fc67daea09b98dbf18801918dcf4a70fa4fd3de57e6ea61fe7d426dbc533083582c4b4f8f70d0f128fe037e52df3ec6a0"

client = OpenAI(api_key=api_key, base_url=server_url)

def main():
    completion = client.chat.completions.create(
        messages=[
            {"role": "user", "content": "Tell me the current temperature in New York"},
        ],
        model="",
        stream=True,
    )
    for chunk in completion:
        print(json.dumps(chunk.to_dict(), indent=2))

if __name__ == "__main__":
    main()
curl "https://api.cremind.io/api/v1/conversation/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer b233c287d2fbcc1a34b9aa5f70413db58769b9f316a8f46cf55344675ad2643fc67daea09b98dbf18801918dcf4a70fa4fd3de57e6ea61fe7d426dbc533083582c4b4f8f70d0f128fe037e52df3ec6a0" \
-d '{
"messages": [
{
"role": "user",
"content": "Tell me the current temperature in New York"
}
],
"stream": true
}'
Response
{
  "id": "063eb751-cd9a-4cfd-9875-990eee2d072f",
  "object": "chat.completion.chunk",
  "created": 1728894321183,
  "model": "gpt-4o-mini",
  "system_fingerprint": "",
  "choices": [
    {
      "index": 0,
      "delta": {
        "role": "assistant",
        "content": "The",
        "data": {
          "type": 0,
          "text": "The"
        },
        "refusal": null
      },
      "logprobs": null,
      "finish_reason": null
    }
  ]
}
{
  "id": "063eb751-cd9a-4cfd-9875-990eee2d072f",
  "object": "chat.completion.chunk",
  "created": 1728894321184,
  "model": "gpt-4o-mini",
  "system_fingerprint": "",
  "choices": [
    {
      "index": 0,
      "delta": {
        "content": " current",
        "data": {
          "type": 0,
          "text": " current"
        }
      },
      "logprobs": null,
      "finish_reason": null
    }
  ]
}
...
{
  "id": "063eb751-cd9a-4cfd-9875-990eee2d072f",
  "object": "chat.completion.chunk",
  "created": 1728894321486,
  "model": "gpt-4o-mini",
  "system_fingerprint": "",
  "choices": [
    {
      "index": 0,
      "delta": {
        "content": ".",
        "data": {
          "type": 0,
          "text": "."
        }
      },
      "logprobs": null,
      "finish_reason": null
    }
  ]
}
{
  "id": "063eb751-cd9a-4cfd-9875-990eee2d072f",
  "object": "chat.completion.chunk",
  "created": 1728894321489,
  "model": "gpt-4o-mini",
  "system_fingerprint": "",
  "choices": [
    {
      "index": 0,
      "delta": {},
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 304,
    "completion_tokens": 31,
    "total_tokens": 335,
    "total_credits": 0.6642
  }
}
Advanced
You can continue a conversation flow by passing its Conversation ID, which is returned as the “id” key of the response, as in the example below. In the examples that follow, we use the Conversation ID “5304284b-52b7-4359-8f77-c76072173b0e”.
{
  "id": "6d9b5f5e-b68b-4fe6-8b42-5ae55ca8141e",
  "object": "chat.completion",
  "created": 1728827642573,
  "model": "gpt-4o-mini",
  "system_fingerprint": "",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The current temperature in New York is 7°F.",
        "refusal": null
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 304,
    "completion_tokens": 31,
    "total_tokens": 335,
    "total_credits": 0.6642
  }
}
Since we are using OpenAI’s library, we include the extra information that the Conversations API requires in a message with “role”: “system”.
One very important thing to note: do not add any conversation history to the messages; we handle that for you. Conversation history is stored automatically, and after a period without any requests in that conversation, it is automatically deleted. The messages array may contain at most 2 elements: one with “role”: “system” and one with “role”: “user”.
const { OpenAI } = require("openai");

const serverUrl = "https://api.cremind.io/api/v1/conversation/";
const apiKey =
  "b233c287d2fbcc1a34b9aa5f70413db58769b9f316a8f46cf55344675ad2643fc67daea09b98dbf18801918dcf4a70fa4fd3de57e6ea61fe7d426dbc533083582c4b4f8f70d0f128fe037e52df3ec6a0";

const openai = new OpenAI({
  baseURL: serverUrl,
  apiKey: apiKey,
});

async function main() {
  const customData = {
    conversation_id: "5304284b-52b7-4359-8f77-c76072173b0e",
    forward_data: {
      token: "abcdefghijklmnopqrstuvwxyz",
    },
  };
  const completion = await openai.chat.completions.create({
    messages: [
      { role: "system", content: JSON.stringify(customData) },
      { role: "user", content: "What temperature did you just say?" },
    ],
    stream: false,
  });
  console.log(JSON.stringify(completion, null, 2));
}

main();
from openai import OpenAI
import json

server_url = "https://api.cremind.io/api/v1/conversation/"
api_key = "b233c287d2fbcc1a34b9aa5f70413db58769b9f316a8f46cf55344675ad2643fc67daea09b98dbf18801918dcf4a70fa4fd3de57e6ea61fe7d426dbc533083582c4b4f8f70d0f128fe037e52df3ec6a0"

client = OpenAI(api_key=api_key, base_url=server_url)

def main():
    custom_data = {
        "conversation_id": "5304284b-52b7-4359-8f77-c76072173b0e",
        "forward_data": {"token": "abcdefghijklmnopqrstuvwxyz"},
    }
    completion = client.chat.completions.create(
        messages=[
            {"role": "system", "content": json.dumps(custom_data)},
            {"role": "user", "content": "What temperature did you just say?"},
        ],
        model="",
        stream=False,
    )
    print(json.dumps(completion.to_dict(), indent=2))

if __name__ == "__main__":
    main()
curl --location 'https://api.cremind.io/api/v1/conversation/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer b233c287d2fbcc1a34b9aa5f70413db58769b9f316a8f46cf55344675ad2643fc67daea09b98dbf18801918dcf4a70fa4fd3de57e6ea61fe7d426dbc533083582c4b4f8f70d0f128fe037e52df3ec6a0' \
--data '{
  "messages": [
    {
      "role": "system",
      "content": "{\"conversation_id\": \"5304284b-52b7-4359-8f77-c76072173b0e\", \"forward_data\": {\"token\": \"abcdefghijklmnopqrstuvwxyz\"}}"
    },
    {
      "role": "user",
      "content": "What temperature did you just say?"
    }
  ]
}'
Response
{
  "id": "5304284b-52b7-4359-8f77-c76072173b0e",
  "object": "chat.completion",
  "created": 1728909335392,
  "model": "gpt-4o-mini",
  "system_fingerprint": "",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "I previously stated that the temperature in New York is 7°F, but the current temperature is actually 0°F.",
        "refusal": null
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 560,
    "completion_tokens": 44,
    "total_tokens": 604,
    "total_credits": 0.7103999999999999
  }
}