AI Assistant plugin
This plugin is only available as a paid add-on to a TinyMCE subscription.
This feature is only available for TinyMCE 6.6 and later.
The AI Assistant plugin allows a user to interact with registered AI APIs by sending queries and viewing responses within a TinyMCE editor dialog.
Once a response is generated and displayed within the dialog, the user can choose to either:
- insert it into the editor at the current selection;
- create another query to further refine the response generated by the AI; or
- close the dialog and discard the returned response.
Users can retrieve a history of their conversations with the AI using the getThreadLog API, including any discarded responses.
Interactive example
This example uses a proxy endpoint to communicate with the OpenAI API. This is done to avoid exposing the API key in the client-side code. For more information on using a proxy server with the AI Assistant plugin, see the AI Proxy Server reference guide.
<textarea id="ai">
<h1 class="p1"><span class="s1">🤖</span><span class="s2"><strong> Try out AI Assistant!</strong></span></h1>
<p class="p2"><span class="s2">Below are just a few of the ways you can use AI Assistant within your app. Since you can define your own custom prompts, the sky really is the limit!</span></p>
<p class="p2"><span class="s2"><strong> </strong></span><span class="s3">🎭</span><span class="s2"><strong> Changing tone </strong>–<strong> </strong>Lighten up the sentence below by selecting the text, clicking <img src="../_images/ai-plugin/wand-icon.svg" width="20" height="20"/>, and choosing <em>Change tone > Friendly</em>.</span></p>
<blockquote>
<p class="p2"><span class="s2">The 3Q23 financial results followed a predictable trend, reflecting the status quo from previous years.</span></p>
</blockquote>
<p class="p2"><span class="s3">📝</span><span class="s2"><strong> Summarizing </strong>– Below is a long paragraph that people may not want to read from start to finish. Get a quick summary by selecting the text, clicking <img src="../_images/ai-plugin/wand-icon.svg" width="20" height="20"/>, and choosing <em>Summarize content</em>.</span></p>
<blockquote>
<p class="p2"><span class="s2">Population growth in the 17th century was marked by significant increment in the number of people around the world. Various factors contributed to this demographic trend. Firstly, advancements in agriculture and technology resulted in increased food production and improved living conditions. This led to decreased mortality rates and better overall health, allowing for more individuals to survive and thrive. Additionally, the exploration and expansion of European powers, such as colonization efforts, fostered migration and settlement in new territories.</span></p>
</blockquote>
<p class="p2"><span class="s3">💡</span><span class="s2"><strong> Writing from scratch</strong> – Ask AI Assistant to generate content from scratch by clicking <img src="../_images/ai-plugin/ai-icon.svg" width="20" height="20"/>, and typing <em>Write a marketing email announcing TinyMCE's new AI Assistant plugin</em>.</span></p>
</textarea>
const fetchApi = import(
'https://unpkg.com/@microsoft/fetch-event-source@2.0.1/lib/esm/index.js'
).then((module) => module.fetchEventSource);
// This example stores the API key in the client side integration. This is not recommended for any purpose.
// Instead, an alternate method for retrieving the API key should be used.
const api_key = '<INSERT_API_KEY_HERE>';
const ai_request = (request, respondWith) => {
respondWith.stream((signal, streamMessage) => {
// Adds each previous query and response as individual messages
const conversation = request.thread.flatMap((event) => {
if (event.response) {
return [
{ role: 'user', content: event.request.query },
{ role: 'assistant', content: event.response.data },
];
} else {
return [];
}
});
// System messages provided by the plugin to format the output as HTML content.
const systemMessages = request.system.map((content) => ({
role: 'system',
content,
}));
// Forms the new query sent to the API
const content =
request.context.length === 0 || conversation.length > 0
? request.query
: `Question: ${request.query} Context: """${request.context}"""`;
const messages = [
...conversation,
...systemMessages,
{ role: 'user', content },
];
let hasHead = false;
let markdownHead = '';
const hasMarkdown = (message) => {
if (message.includes('`') && markdownHead !== '```') {
const numBackticks = message.split('`').length - 1;
markdownHead += '`'.repeat(numBackticks);
if (hasHead && markdownHead === '```') {
markdownHead = '';
hasHead = false;
}
return true;
} else if (message.includes('html') && markdownHead === '```') {
markdownHead = '';
hasHead = true;
return true;
}
return false;
};
const requestBody = {
model: 'gpt-4o',
temperature: 0.7,
max_tokens: 4000,
messages,
stream: true,
};
const openAiOptions = {
signal,
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${api_key}`,
},
body: JSON.stringify(requestBody),
};
const onopen = async (response) => {
if (response) {
const contentType = response.headers.get('content-type');
if (response.ok && contentType?.includes('text/event-stream')) {
return;
} else if (contentType?.includes('application/json')) {
const data = await response.json();
if (data.error) {
throw new Error(
`${data.error.type}: ${data.error.message}`
);
}
}
} else {
throw new Error('Failed to communicate with the ChatGPT API');
}
};
// This function passes each new message into the plugin via the `streamMessage` callback.
const onmessage = (ev) => {
const data = ev.data;
if (data !== '[DONE]') {
const parsedData = JSON.parse(data);
const firstChoice = parsedData?.choices[0];
const message = firstChoice?.delta?.content;
if (message && message !== '') {
if (!hasMarkdown(message)) {
streamMessage(message);
}
}
}
};
const onerror = (error) => {
// Stop the operation and prevent fetch-event-source from retrying
throw error;
};
// Use Microsoft's fetch-event-source library to consume the streamed response,
// as the browser's built-in `EventSource` API only supports GET requests with
// data passed via the query string (limited to around 2000 characters)
return fetchApi
.then((fetchEventSource) =>
fetchEventSource('https://api.openai.com/v1/chat/completions', {
...openAiOptions,
openWhenHidden: true,
onopen,
onmessage,
onerror,
})
)
.then(async (response) => {
if (response && !response.ok) {
const data = await response.json();
if (data.error) {
throw new Error(
`${data.error.type}: ${data.error.message}`
);
}
}
})
.catch(onerror);
});
};
tinymce.init({
selector: 'textarea', // change this value according to your HTML
plugins: 'ai advlist anchor autolink charmap advcode emoticons fullscreen help image link lists media preview searchreplace table',
toolbar: 'undo redo | aidialog aishortcuts | styles fontsizeinput | bold italic | align bullist numlist | table link image | code',
height: 650,
ai_request,
});
Basic setup
To add the AI Assistant plugin to the editor, follow these steps:
- Add ai to the plugins option in the editor configuration.
- Add the ai_request function to the editor configuration.
For example:
tinymce.init({
selector: 'textarea', // change this value according to your HTML
plugins: 'ai',
toolbar: 'aidialog aishortcuts',
ai_request: <AI_REQUEST_FUNCTION>,
});
Using a proxy server with AI Assistant
As per OpenAI’s best practices for API key safety, deploying an API key in a client-side environment is specifically not recommended.
Using a proxy server keeps the API key out of client-side code, reducing financial and service uptime risks.
A proxy server can also provide flexibility by allowing extra processing before the request is sent to the AI endpoint and before the response is returned to the user.
See the AI Proxy Server reference guide for information on how to set up a proxy server for use with the AI Assistant.
The AI Proxy Server reference guide is, as its name notes, a reference. There is no single proxy server setup that is right for all circumstances, and other setups may be better suited to your use case.
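For example, below is a minimal sketch of an ai_request function that calls a same-origin proxy endpoint rather than the OpenAI API directly. The /api/ai route and the { text: ... } response shape are assumptions; match them to whatever your proxy server exposes.
const ai_request = (request, respondWith) => {
  respondWith.string((signal) =>
    window.fetch('/api/ai', {
      signal,
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      // Forward the query and context; the proxy adds the API key server-side
      body: JSON.stringify({ query: request.query, context: request.context }),
    })
      .then((response) => {
        if (!response.ok) {
          throw new Error('Failed to communicate with the proxy server');
        }
        return response.json();
      })
      // Assumed response shape from the proxy: { text: '...' }
      .then((data) => data.text)
  );
};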
Options
The following configuration options affect the behavior of the AI Assistant plugin.
ai_request
The AI Assistant uses the ai_request function to send prompts to an AI endpoint and display the responses.
The ai_request function is called each time a user submits a prompt. Prompts are only submitted while the AI Assistant dialog is open, whether by typing in the dialog input field or by using an AI Assistant shortcut.
Once a response is provided, the content returned by the ai_request function is displayed within the dialog.
This option is required to use the AI Assistant plugin.
Type: Function
Example: using ai_request to interface with the OpenAI Chat Completions API
// This example stores the API key in the client side integration. This is not recommended for any purpose.
// Instead, an alternate method for retrieving the API key should be used.
const api_key = '<INSERT_API_KEY_HERE>';
const ai_request = (request, respondWith) => {
const openAiOptions = {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${api_key}`
},
body: JSON.stringify({
model: 'gpt-4o',
temperature: 0.7,
max_tokens: 800,
messages: [{ role: 'user', content: request.prompt }],
})
};
respondWith.string((signal) => window.fetch('https://api.openai.com/v1/chat/completions', { signal, ...openAiOptions })
.then(async (response) => {
if (response) {
const data = await response.json();
if (data.error) {
throw new Error(`${data.error.type}: ${data.error.message}`);
} else if (response.ok) {
// Extract the response content from the data returned by the API
return data?.choices[0]?.message?.content?.trim();
}
} else {
throw new Error('Failed to communicate with the ChatGPT API');
}
})
);
};
tinymce.init({
selector: 'textarea', // Change this value according to your HTML
plugins: 'ai',
toolbar: 'aidialog aishortcuts',
ai_request
});
The request object
The ai_request function is given a request object as its first parameter, which has the following fields:
query
- The user-submitted prompt as a string, without any context. This is either the text written by the user in the AI Assistant dialog, or the prompt defined in the shortcut object selected by the user from the shortcuts menu.
context
- The current selection as a string, if any, or the current response displayed in the dialog. This can be combined with the query in a custom manner by the integrator to form a request. The current selection is provided in HTML format, as is any displayed HTML response, and will increase token use.
thread
- An array of objects containing the history of requests and responses within the dialog. This thread array is the same as the one recorded by the getThreadLog API for the current instance of the AI Assistant dialog.
system
- An array of messages which provide instructions for handling the user prompts. By default, the system array contains:
[ 'Answer the question based on the context below.',
'The response should be in HTML format.',
'The response should preserve any HTML formatting, links, and styles in the context.' ]
prompt
- The submitted prompt as a string, combined with any current selection (when first opening the dialog) or the previous response. The AI Assistant plugin provides a customised format which combines these strings, though integrators are free to build their own prompt from any of the other fields provided in the request object.
The default prompt and token use.
The AI Assistant automatically prepends the system messages and the context to the query to build the default prompt. This is intended to improve the user experience and response accuracy, and to simplify the initial integration of the AI Assistant plugin. However, the combined prompt uses more tokens than the query alone.
The respondWith object
The ai_request function is given an object containing two separate callbacks as its second parameter. These callbacks allow the integrator to choose how the response from the API is displayed in the AI Assistant dialog.
Both callbacks expect a Promise which indicates that the response is either finished (when resolved) or interrupted (when rejected). The return type of the promise differs between the callbacks.
Both callbacks provide a signal parameter.
signal
- A signal that can be used to abort the request if the user closes the dialog or cancels a streaming response.
The respondWith.string callback
The respondWith.string callback provides functionality for displaying the entire response from the AI at once.
The final response should be returned as a string using Promise.resolve(). This string is then displayed within the AI Assistant dialog.
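A minimal sketch of respondWith.string is shown below; fetchCompletion() is a hypothetical helper that resolves with the full response text.
const ai_request = (request, respondWith) => {
  respondWith.string((signal) =>
    // fetchCompletion() is a hypothetical helper returning a Promise<string>;
    // the resolved string is displayed in the AI Assistant dialog
    fetchCompletion(request.prompt, { signal }).then((text) => text.trim())
  );
};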
The respondWith.stream callback
The respondWith.stream callback provides functionality for displaying streamed responses from the AI.
This callback expects a Promise which resolves once the AI has finished streaming the response.
The callback provides a streamMessage callback as its second parameter, which should be called with each new partial message so the message can be displayed in the AI Assistant dialog immediately.
streamMessage
- Takes a string and appends it to the content displayed in the AI Assistant dialog.
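A minimal sketch of respondWith.stream is shown below; streamCompletion() is a hypothetical helper that invokes a callback for each partial message and resolves once the stream is finished.
const ai_request = (request, respondWith) => {
  respondWith.stream((signal, streamMessage) =>
    // streamCompletion() is a hypothetical helper returning a Promise that
    // resolves when the stream ends; each chunk is passed to streamMessage
    // so it is appended to the dialog immediately
    streamCompletion(request.prompt, {
      signal,
      onChunk: (chunk) => streamMessage(chunk),
    })
  );
};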
ai_shortcuts
The ai_shortcuts option controls the list of AI Assistant shortcuts available in the AI Shortcuts toolbar button and menu item.
This option can be configured with an array to present a customised set of AI Assistant shortcuts. It can also be set to a Boolean value to control the use of the default list of AI Assistant shortcuts.
When not specified, or set to true, the AI Assistant shortcuts toolbar button and menu item are shown and display the default set of shortcuts included with the AI Assistant.
When set to [] (an empty array) or false, the AI Assistant shortcuts toolbar button and menu item are not shown in the TinyMCE instance.
When configured with an instance-specific array of objects, the AI Assistant shortcuts toolbar button and menu item are shown, and display the configured shortcuts when activated.
Type: Array of Objects, or Boolean
Default value:
[
{ title: 'Summarize content', prompt: 'Provide the key points and concepts in this content in a succinct summary.', selection: true },
{ title: 'Improve writing', prompt: 'Rewrite this content with no spelling mistakes, proper grammar, and with more descriptive language, using best writing practices without losing the original meaning.', selection: true },
{ title: 'Simplify language', prompt: 'Rewrite this content with simplified language and reduce the complexity of the writing, so that the content is easier to understand.', selection: true },
{ title: 'Expand upon', prompt: 'Expand upon this content with descriptive language and more detailed explanations, to make the writing easier to understand and increase the length of the content.', selection: true },
{ title: 'Trim content', prompt: 'Remove any repetitive, redundant, or non-essential writing in this content without changing the meaning or losing any key information.', selection: true },
{
title: 'Change tone', subprompts: [
{ title: 'Professional', prompt: 'Rewrite this content using polished, formal, and respectful language to convey professional expertise and competence.', selection: true },
{ title: 'Casual', prompt: 'Rewrite this content with casual, informal language to convey a casual conversation with a real person.', selection: true },
{ title: 'Direct', prompt: 'Rewrite this content with direct language using only the essential information.', selection: true },
{ title: 'Confident', prompt: 'Rewrite this content using compelling, optimistic language to convey confidence in the writing.', selection: true },
{ title: 'Friendly', prompt: 'Rewrite this content using friendly, comforting language, to convey understanding and empathy.', selection: true },
]
},
{
title: 'Change style', subprompts: [
{ title: 'Business', prompt: 'Rewrite this content as a business professional with formal language.', selection: true },
{ title: 'Legal', prompt: 'Rewrite this content as a legal professional using valid legal terminology.', selection: true },
{ title: 'Journalism', prompt: 'Rewrite this content as a journalist using engaging language to convey the importance of the information.', selection: true },
{ title: 'Medical', prompt: 'Rewrite this content as a medical professional using valid medical terminology.', selection: true },
{ title: 'Poetic', prompt: 'Rewrite this content as a poem using poetic techniques without losing the original meaning.', selection: true },
]
},
{
title: 'Translate', subprompts: [
{ title: 'Translate to English', prompt: 'Translate this content to English language.', selection: true },
{ title: 'Translate to Spanish', prompt: 'Translate this content to Spanish language.', selection: true },
{ title: 'Translate to Portuguese', prompt: 'Translate this content to Portuguese language.', selection: true },
{ title: 'Translate to German', prompt: 'Translate this content to German language.', selection: true },
{ title: 'Translate to French', prompt: 'Translate this content to French language.', selection: true },
{ title: 'Translate to Norwegian', prompt: 'Translate this content to Norwegian language.', selection: true },
{ title: 'Translate to Ukrainian', prompt: 'Translate this content to Ukrainian language.', selection: true },
{ title: 'Translate to Japanese', prompt: 'Translate this content to Japanese language.', selection: true },
{ title: 'Translate to Korean', prompt: 'Translate this content to Korean language.', selection: true },
{ title: 'Translate to Simplified Chinese', prompt: 'Translate this content to Simplified Chinese language.', selection: true },
{ title: 'Translate to Hebrew', prompt: 'Translate this content to Hebrew language.', selection: true },
{ title: 'Translate to Hindi', prompt: 'Translate this content to Hindi language.', selection: true },
{ title: 'Translate to Arabic', prompt: 'Translate this content to Arabic language.', selection: true },
]
},
]
Translations and changes
The default AI Assistant shortcuts are only available in English. They have not been translated into any other languages, and switching TinyMCE to a language other than English does not change the default AI Assistant shortcuts. Also, the default AI Assistant shortcuts are subject to change. If you prefer to keep these shortcuts, include them within your integration.
Example: using ai_shortcuts to present a customized set of AI Assistant shortcuts
tinymce.init({
selector: 'textarea', // change this value according to your HTML
plugins: 'ai',
toolbar: 'aidialog aishortcuts',
ai_request: (request, respondWith) => respondWith.string(() => Promise.reject("See docs to implement AI Assistant")),
ai_shortcuts: [
{ title: 'Screenplay', prompt: 'Convert this to screenplay format.', selection: true },
{ title: 'Stage play', prompt: 'Convert this to stage play format.', selection: true },
{ title: 'Classical', subprompts:
[
{ title: 'Dialogue', prompt: 'Convert this to a Socratic dialogue.', selection: true },
{ title: 'Homeric', prompt: 'Convert this to a Classical Epic.', selection: true }
]
},
{ title: 'Celtic', subprompts:
[
{ title: 'Bardic', prompt: 'Convert this to Bardic verse.', selection: true },
{ title: 'Filí', prompt: 'Convert this to Filí-an verse.', selection: true }
]
},
]
});
Example: disabling ai_shortcuts
To disable the AI Assistant shortcuts menu and toolbar options, set ai_shortcuts to false (or to [], an empty array).
tinymce.init({
selector: 'textarea', // change this value according to your HTML
ai_shortcuts: false
});
tinymce.init({
selector: 'textarea', // change this value according to your HTML
ai_shortcuts: []
});
Valid Shortcuts
Valid shortcut objects contain the following properties.
title
- A string which is displayed in the aishortcuts toolbar button and menu item. This indicates which shortcut is used, or which category of shortcuts is in this menu.
And either:
subprompts
- An array containing more valid shortcut objects. This allows shortcuts to be grouped into categories within the AI Assistant shortcuts toolbar button and menu item.
or:
prompt
- A string containing the query which is given to the ai_request function when the shortcut is used.
The selection property
This feature is only available for TinyMCE 6.8 and later.
Shortcut objects with the prompt property may also contain the following optional property.
selection
- A boolean value which is matched against the current selection to set the enabled state of the shortcut. When selection is:
- true: the shortcut is only enabled when the user has made a selection in the editor.
- false: the shortcut is only enabled when the user has not made a selection in the editor.
- undefined, or not set: the shortcut is always enabled.
- This property allows the definition of shortcuts which should only operate when the user has selected content, requiring the selection as context for the AI when the property is true. Additionally, shortcuts which are intended to generate specific content are not enabled alongside a selection when the property is false, as shown in the sketch below.
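For example, the following sketch defines one shortcut that is only enabled when content is selected and one that is only enabled when nothing is selected; the shortcut titles and prompts are illustrative only.
tinymce.init({
  selector: 'textarea', // change this value according to your HTML
  plugins: 'ai',
  toolbar: 'aidialog aishortcuts',
  ai_request: (request, respondWith) => respondWith.string(() => Promise.reject("See docs to implement AI Assistant")),
  ai_shortcuts: [
    // Only enabled when the user has selected content in the editor
    { title: 'Summarize selection', prompt: 'Provide a succinct summary of this content.', selection: true },
    // Only enabled when nothing is selected in the editor
    { title: 'Write an introduction', prompt: 'Write a short introduction for a blog post.', selection: false }
  ]
});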
Toolbar buttons
The AI Assistant plugin provides the following toolbar buttons:
Toolbar button identifier | Description
---|---
aidialog | Opens the AI Assistant dialog.
aishortcuts | Opens the AI Shortcuts menu, displaying the available shortcut prompts for querying the AI API.
These toolbar buttons can be added to the editor using:
- The toolbar configuration option.
- The quickbars_insert_toolbar configuration option.
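For example, a sketch adding the buttons to both the main toolbar and the quickbars insert toolbar; this assumes the quickbars plugin is also loaded.
tinymce.init({
  selector: 'textarea', // change this value according to your HTML
  plugins: 'ai quickbars',
  toolbar: 'aidialog aishortcuts',
  quickbars_insert_toolbar: 'aidialog aishortcuts',
  ai_request: (request, respondWith) => respondWith.string(() => Promise.reject("See docs to implement AI Assistant"))
});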
Menu items
The AI Assistant plugin provides the following menu items:
Menu item identifier | Default Menu Location | Description
---|---|---
aidialog | Tools | Opens the AI Assistant dialog.
aishortcuts | Tools | Opens the AI Assistant shortcuts sub-menu, displaying the available shortcut prompts for querying the AI API.
These menu items can be added to the editor using:
- The menu configuration option.
- The contextmenu configuration option.
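For example, a sketch adding the menu items to the Tools menu and to the context menu.
tinymce.init({
  selector: 'textarea', // change this value according to your HTML
  plugins: 'ai',
  menu: {
    tools: { title: 'Tools', items: 'aidialog aishortcuts' }
  },
  contextmenu: 'aidialog aishortcuts',
  ai_request: (request, respondWith) => respondWith.string(() => Promise.reject("See docs to implement AI Assistant"))
});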
Commands
The AI Assistant plugin provides the following TinyMCE commands.
Command | Description
---|---
mceAiDialog | This command opens the AI Assistant dialog. For details, see Using mceAiDialog.
mceAiDialogClose | This command closes the AI Assistant dialog.
tinymce.activeEditor.execCommand('mceAiDialog');
tinymce.activeEditor.execCommand('mceAiDialog', true|false, { prompt: '<value1>', generate: true, display: false });
tinymce.activeEditor.execCommand('mceAiDialogClose');
Using mceAiDialog
mceAiDialog accepts an object with any of the following key-value pairs:
Name | Value | Requirement | Description
---|---|---|---
prompt | String | Not required | The prompt to pre-fill the input field with when the dialog is first opened.
generate | Boolean | Not required | Whether a request should be sent when the dialog is first opened.
display | Boolean | Not required | Whether to display the input field and generate button in the dialog when the dialog is first opened.
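For example, the following call opens the dialog pre-filled with a prompt, sends the request immediately, and hides the input field and generate button; the prompt text is illustrative only.
tinymce.activeEditor.execCommand('mceAiDialog', false, {
  prompt: 'Summarize the selected content.',
  generate: true,
  display: false
});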
Events
The AI Assistant plugin provides the following events.
Name | Data | Description
---|---|---
AIRequest | | Fired when a request is sent to the ai_request function.
AIResponse | | Fired when an ai_request call returns a response.
AIError | | Fired when an ai_request call returns an error.
AIDialogOpen | N/A | Fired when the AI Assistant dialog is opened.
AIDialogClose | N/A | Fired when the AI Assistant dialog is closed.
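A sketch of listening for these events on an editor instance is shown below; the handlers simply log the event object passed by TinyMCE.
tinymce.init({
  selector: 'textarea', // change this value according to your HTML
  plugins: 'ai',
  toolbar: 'aidialog aishortcuts',
  ai_request: (request, respondWith) => respondWith.string(() => Promise.reject("See docs to implement AI Assistant")),
  setup: (editor) => {
    // Log each AI Assistant event as it is fired
    editor.on('AIRequest', (event) => console.log('AIRequest', event));
    editor.on('AIResponse', (event) => console.log('AIResponse', event));
    editor.on('AIError', (event) => console.error('AIError', event));
    editor.on('AIDialogOpen', () => console.log('AIDialogOpen'));
    editor.on('AIDialogClose', () => console.log('AIDialogClose'));
  }
});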
APIs
The AI Assistant plugin provides the following APIs.
Name | Arguments | Description
---|---|---
getThreadLog | N/A | Retrieves the history of each conversation thread generated while using the plugin.
// Retrieves the history of each conversation thread generated while using the plugin in the active editor.
tinymce.activeEditor.plugins.ai.getThreadLog();
The getThreadLog API
A user or integrator can retrieve the history of each conversation thread by calling editor.plugins.ai.getThreadLog() on an editor instance with the AI Assistant plugin enabled.
A new thread is recorded into the thread log with a unique ID each time the AI dialog is opened. When a request returns either a response or an error, an event is recorded in the current thread containing the following fields:
eventUid
- A unique identifier for the event.
timestamp
- The date and time at which the event was recorded in the thread, in ISO 8601 format.
request
- The request object as it was provided to the integration's ai_request function, excluding the current thread.
and either:
response
- The response object provided by the integration, with a type field denoting the ai_request callback used (either string or stream) and a data field containing the entire response data; or
error
- A string with any error returned by the integration.
The thread log can contain any number of threads, with any number of events in each thread. The following example only shows a single thread containing a single event. The returned object is provided in the following format:
{
"mce-aithread_123456": [
{
"eventUid": "mce-aithreadevent_654321",
"timestamp": "2023-03-15T09:00:00.000Z",
"request": {
"prompt": "Answer the question based on the context below.\nThe response should be in HTML format.\nThe response should preserve any HTML formatting, links, and styles in the context.\n\nContext: \"\"\"Some selection\"\"\"\n\nQuestion: \"\"\"A user query\"\"\"\n\nAnswer:",
"query": "A user query",
"context": "Some selection",
"system": [
"Answer the question based on the context below.",
"The response should be in HTML format.",
"The response should preserve any HTML formatting, links, and styles in the context."
]
},
"response": {
"type": "string",
"data": "Sorry, there is not enough information to provide an answer to your query,"
}
}
]
}
Once a TinyMCE editor instance is closed, any temporarily stored results are lost, so use getThreadLog() to retrieve and store any responses which should not be lost.
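For example, a sketch of retrieving and persisting the thread log while the editor is still open; saveThreadLog() is a hypothetical integration-side helper (for example, a request to your own backend).
// Retrieve the full thread log from the active editor and hand it to a
// hypothetical saveThreadLog() helper for persistent storage.
const threadLog = tinymce.activeEditor.plugins.ai.getThreadLog();
saveThreadLog(threadLog);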