OpenAI ChatGPT integration guide
This plugin is only available as a paid add-on to a TinyMCE subscription.
Introduction
This guide provides instructions for integrating the AI Assistant plugin using OpenAI ChatGPT in TinyMCE.
To learn more about the difference between string and streaming responses, see the respondWith object section on the AI Assistant plugin page.
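The difference can be sketched with a pair of toy responders (the handler bodies and the mock text below are illustrative only; the real respondWith object is supplied by the plugin): a string responder resolves once with the complete response text, while a stream responder pushes partial text through the streamMessage callback as it arrives.

```javascript
// Illustrative only: a string responder resolves a single promise with the
// full text, while a stream responder emits chunks via streamMessage and
// resolves once the stream has ended.
const stringResponder = (request, respondWith) =>
  // Resolves once with the complete response text
  respondWith.string((signal) => Promise.resolve('complete response text'));

const streamResponder = (request, respondWith) =>
  // Emits partial text as it arrives, then resolves when the stream ends
  respondWith.stream((signal, streamMessage) => {
    ['partial ', 'response ', 'text'].forEach(streamMessage);
    return Promise.resolve();
  });
```

In the examples that follow, respondWith.string wraps a single fetch promise, while respondWith.stream wires a server-sent-events reader to streamMessage.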
Prerequisites
Before you begin, you need the following:
- An OpenAI API key. To get an API key, sign up for an account on the OpenAI Platform.
The following examples show how to use the authentication credentials with the API in a client-side integration. This is not recommended for production. Instead, access the API through a proxy server or a server-side integration to prevent unauthorized access to the API key.
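One way to keep the key off the client is to have the browser call an endpoint on your own server, which attaches the Authorization header before forwarding the request to OpenAI. The sketch below is a minimal server-side helper under that assumption; the helper name and the idea of a dedicated route are illustrative, not part of the plugin.

```javascript
// Illustrative server-side helper: the browser posts its request body to your
// own endpoint, and the server attaches the API key before forwarding to
// OpenAI. The name buildProxiedRequest is an assumption for this sketch.
const OPENAI_URL = 'https://api.openai.com/v1/chat/completions';

const buildProxiedRequest = (clientBody, apiKey) => ({
  url: OPENAI_URL,
  options: {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // The key is read from the server environment and never sent to the browser
      'Authorization': `Bearer ${apiKey}`
    },
    body: JSON.stringify(clientBody)
  }
});
```

A server route would call fetch(url, options) with the result and relay the response; the client-side ai_request then targets your own endpoint with no Authorization header at all.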
String response
This example demonstrates how to integrate the AI Assistant plugin with the OpenAI API to generate string responses.
// This example stores the API key in the client-side integration, which is not
// recommended for production use. Retrieve the key through a server-side
// integration or proxy instead.
const api_key = '<INSERT_API_KEY_HERE>';
const ai_request = (request, respondWith) => {
  const openAiOptions = {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${api_key}`
    },
    body: JSON.stringify({
      model: 'gpt-4o',
      temperature: 0.7,
      max_tokens: 800,
      messages: [{ role: 'user', content: request.prompt }]
    })
  };
  respondWith.string((signal) => window.fetch('https://api.openai.com/v1/chat/completions', { signal, ...openAiOptions })
    .then(async (response) => {
      if (response) {
        const data = await response.json();
        if (data.error) {
          throw new Error(`${data.error.type}: ${data.error.message}`);
        } else if (response.ok) {
          // Extract the response content from the data returned by the API
          return data?.choices?.[0]?.message?.content?.trim();
        } else {
          throw new Error(`Request failed with HTTP status ${response.status}`);
        }
      } else {
        throw new Error('Failed to communicate with the ChatGPT API');
      }
    })
  );
};
tinymce.init({
  selector: 'textarea', // Change this value according to your HTML
  plugins: 'ai',
  toolbar: 'aidialog aishortcuts',
  ai_request
});
Streaming response
This example demonstrates how to integrate the AI Assistant plugin with the OpenAI API to generate streaming responses.
const fetchApi = import("https://unpkg.com/@microsoft/fetch-event-source@2.0.1/lib/esm/index.js").then(module => module.fetchEventSource);
// This example stores the API key in the client-side integration, which is not
// recommended for production use. Retrieve the key through a server-side
// integration or proxy instead.
const api_key = '<INSERT_API_KEY_HERE>';
const ai_request = (request, respondWith) => {
  respondWith.stream((signal, streamMessage) => {
    // Adds each previous query and response in the thread as individual messages
    const conversation = request.thread.flatMap((event) => {
      if (event.response) {
        return [
          { role: 'user', content: event.request.query },
          { role: 'assistant', content: event.response.data }
        ];
      } else {
        return [];
      }
    });

    // System messages provided by the plugin to format the output as HTML content
    const pluginSystemMessages = request.system.map((content) => ({
      role: 'system',
      content
    }));

    const systemMessages = [
      ...pluginSystemMessages,
      // Additional system messages to control the output of the AI
      { role: 'system', content: 'Remove lines with ``` from the response start and response end.' }
    ];

    // Forms the new query sent to the API
    const content = request.context.length === 0 || conversation.length > 0
      ? request.query
      : `Question: ${request.query} Context: """${request.context}"""`;

    const messages = [
      ...conversation,
      ...systemMessages,
      { role: 'user', content }
    ];

    const requestBody = {
      model: 'gpt-4o',
      temperature: 0.7,
      max_tokens: 800,
      messages,
      stream: true
    };

    const openAiOptions = {
      signal,
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${api_key}`
      },
      body: JSON.stringify(requestBody)
    };

    const onopen = async (response) => {
      if (response) {
        const contentType = response.headers.get('content-type');
        if (response.ok && contentType?.includes('text/event-stream')) {
          return;
        } else if (contentType?.includes('application/json')) {
          const data = await response.json();
          if (data.error) {
            throw new Error(`${data.error.type}: ${data.error.message}`);
          }
        } else {
          throw new Error(`Unexpected response with HTTP status ${response.status}`);
        }
      } else {
        throw new Error('Failed to communicate with the ChatGPT API');
      }
    };

    // Passes each new message chunk into the plugin via the `streamMessage` callback
    const onmessage = (ev) => {
      const data = ev.data;
      if (data !== '[DONE]') {
        const parsedData = JSON.parse(data);
        const firstChoice = parsedData?.choices?.[0];
        const message = firstChoice?.delta?.content;
        if (message) {
          streamMessage(message);
        }
      }
    };

    const onerror = (error) => {
      // Rethrowing stops the operation and prevents fetch-event-source from retrying
      throw error;
    };

    // Use Microsoft's fetch-event-source library instead of the browser `EventSource`
    // API, which only supports GET requests and would force the request data into a
    // query string limited to roughly 2000 characters
    return fetchApi
      .then(fetchEventSource =>
        fetchEventSource('https://api.openai.com/v1/chat/completions', {
          ...openAiOptions,
          openWhenHidden: true,
          onopen,
          onmessage,
          onerror
        })
      )
      .then(async (response) => {
        if (response && !response.ok) {
          const data = await response.json();
          if (data.error) {
            throw new Error(`${data.error.type}: ${data.error.message}`);
          }
        }
      })
      .catch(onerror);
  });
};
tinymce.init({
  selector: 'textarea', // Change this value according to your HTML
  plugins: 'ai',
  toolbar: 'aidialog aishortcuts',
  ai_request
});