Azure AI integration guide
This plugin is only available as a paid add-on to a TinyMCE subscription.
Introduction
This guide provides instructions for integrating the AI Assistant plugin using the Azure OpenAI Service in TinyMCE.
Azure OpenAI Service provides REST API access to OpenAI’s language models with Azure-specific features such as private networking, regional availability, and responsible AI content filtering.
To learn more about the difference between string and streaming responses, see the respondWith object on the plugin page.
Prerequisites
Before beginning, make sure an OpenAI model has been set up in Azure. The model’s endpoint URL and API key are required for integration. For more information, see the Quick Start Guide provided by Azure.
The following examples show how to use the authentication credentials with the API in a client-side integration. This is not recommended for production. To prevent unauthorized access to the API, access it only through a proxy server or a server-side integration.
String response
This example demonstrates how to integrate the AI Assistant plugin with the Azure OpenAI Service Chat completions API to generate a string response.
// This example stores the API key in the client-side integration. This is not recommended for any purpose.
// Instead, an alternate method for retrieving the API key should be used.
const AZURE_OPENAI_API_KEY = '<INSERT_API_KEY_HERE>';
const AZURE_OPENAI_ENDPOINT = '<INSERT_ENDPOINT_URL_HERE>'; // e.g. https://<INSERT_RESOURCE_NAME_HERE>.openai.azure.com/openai/deployments/<INSERT_DEPLOYMENT_ID_HERE>/chat/completions?api-version=<INSERT_API_VERSION_HERE>

const ai_request = (request, respondWith) => {
  const azureOpenAiOptions = {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'api-key': AZURE_OPENAI_API_KEY
    },
    body: JSON.stringify({
      temperature: 0.7,
      max_tokens: 800,
      messages: [{ role: 'user', content: request.prompt }]
    })
  };
  respondWith.string((signal) => window.fetch(AZURE_OPENAI_ENDPOINT, { signal, ...azureOpenAiOptions })
    .then(async (response) => {
      if (response) {
        const data = await response.json();
        if (data.error) {
          throw new Error(`${data.error.type}: ${data.error.message}`);
        } else if (response.ok) {
          // Extract the response content from the data returned by the API
          return data?.choices?.[0]?.message?.content?.trim();
        } else {
          throw new Error(`Request failed with status ${response.status}`);
        }
      } else {
        throw new Error('Failed to communicate with the Azure OpenAI API');
      }
    })
  );
};

tinymce.init({
  selector: 'textarea', // change this value according to your HTML
  plugins: 'ai',
  toolbar: 'aidialog aishortcuts',
  ai_request
});
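For reference, the non-streaming Chat Completions API returns the reply text at `choices[0].message.content`. The sketch below shows the same extraction used in the handler above, run against an abbreviated sample response body (field values are illustrative only):

```javascript
// Abbreviated sample of a Chat Completions response body (fields trimmed for illustration).
const sampleResponse = {
  id: 'chatcmpl-123',
  object: 'chat.completion',
  choices: [
    {
      index: 0,
      message: { role: 'assistant', content: '  Hello from the assistant.  ' },
      finish_reason: 'stop'
    }
  ]
};

// Same extraction as the integration above: take the first choice and trim whitespace.
const extractContent = (data) => data?.choices?.[0]?.message?.content?.trim();

console.log(extractContent(sampleResponse)); // → 'Hello from the assistant.'
console.log(extractContent({ choices: [] })); // → undefined (no choices returned)
```

The optional chaining means a malformed or empty response yields `undefined` rather than throwing, which is why the handler checks `data.error` and `response.ok` first.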
Streaming response
This example demonstrates how to integrate the AI Assistant plugin with the Azure OpenAI Service Chat completions API to generate streaming responses.
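Before walking through the full integration, it may help to see how the conversation history is built from `request.thread`. The sketch below assumes the thread shape implied by the handler (each completed event holds the user's query and the assistant's response data); sample values are illustrative only:

```javascript
// Assumed shape of request.thread, inferred from the streaming handler:
// completed events carry both the query and the response; in-flight events do not.
const thread = [
  { request: { query: 'Summarize this.' }, response: { data: 'A short summary.' } },
  { request: { query: 'Make it shorter.' }, response: undefined } // in flight: skipped
];

// Each completed event expands into two chat messages (user + assistant);
// events without a response contribute nothing to the conversation.
const conversation = thread.flatMap((event) =>
  event.response
    ? [
        { role: 'user', content: event.request.query },
        { role: 'assistant', content: event.response.data }
      ]
    : []
);

console.log(conversation.length); // → 2 (one user message, one assistant message)
```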
const fetchApi = import("https://unpkg.com/@microsoft/fetch-event-source@2.0.1/lib/esm/index.js").then(module => module.fetchEventSource);

// This example stores the API key in the client-side integration. This is not recommended for any purpose.
// Instead, an alternate method for retrieving the API key should be used.
const AZURE_OPENAI_API_KEY = '<INSERT_API_KEY_HERE>';
const AZURE_OPENAI_ENDPOINT = '<INSERT_ENDPOINT_URL_HERE>'; // e.g. https://<INSERT_RESOURCE_NAME_HERE>.openai.azure.com/openai/deployments/<INSERT_DEPLOYMENT_ID_HERE>/chat/completions?api-version=<INSERT_API_VERSION_HERE>

const ai_request = (request, respondWith) => {
  respondWith.stream((signal, streamMessage) => {
    // Adds each previous query and response as individual messages
    const conversation = request.thread.flatMap((event) => {
      if (event.response) {
        return [
          { role: 'user', content: event.request.query },
          { role: 'assistant', content: event.response.data }
        ];
      } else {
        return [];
      }
    });

    // System messages provided by the plugin to format the output as HTML content.
    const pluginSystemMessages = request.system.map((content) => ({
      role: 'system',
      content
    }));

    const systemMessages = [
      ...pluginSystemMessages,
      // Additional system messages to control the output of the AI
      { role: 'system', content: 'Remove lines with ``` from the response start and response end.' }
    ];

    // Forms the new query sent to the API
    const content = request.context.length === 0 || conversation.length > 0
      ? request.query
      : `Question: ${request.query} Context: """${request.context}"""`;

    const messages = [
      ...conversation,
      ...systemMessages,
      { role: 'user', content }
    ];

    const requestBody = {
      temperature: 0.7,
      max_tokens: 800,
      messages,
      stream: true
    };

    const azureOpenAiOptions = {
      signal,
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'api-key': AZURE_OPENAI_API_KEY
      },
      body: JSON.stringify(requestBody)
    };

    const onopen = async (response) => {
      if (response) {
        const contentType = response.headers.get('content-type');
        if (response.ok && contentType?.includes('text/event-stream')) {
          return;
        } else if (contentType?.includes('application/json')) {
          const data = await response.json();
          if (data.error) {
            throw new Error(`${data.error.type}: ${data.error.message}`);
          }
        } else {
          throw new Error(`Unexpected response with status ${response.status}`);
        }
      } else {
        throw new Error('Failed to communicate with the Azure OpenAI API');
      }
    };

    // This function passes each new message into the plugin via the `streamMessage` callback.
    const onmessage = (ev) => {
      const data = ev.data;
      if (data !== '[DONE]') {
        const parsedData = JSON.parse(data);
        const firstChoice = parsedData?.choices?.[0];
        const message = firstChoice?.delta?.content;
        if (message) {
          streamMessage(message);
        }
      }
    };

    const onerror = (error) => {
      // Rethrow to stop the operation and prevent fetch-event-source from retrying
      throw error;
    };

    // Use Microsoft's fetch-event-source library to work around the browser `EventSource`
    // API, which only supports GET requests and would force the request into a query
    // string subject to a roughly 2000 character URL limit.
    return fetchApi
      .then(fetchEventSource =>
        fetchEventSource(AZURE_OPENAI_ENDPOINT, {
          ...azureOpenAiOptions,
          openWhenHidden: true,
          onopen,
          onmessage,
          onerror
        })
      )
      .then(async (response) => {
        if (response && !response.ok) {
          const data = await response.json();
          if (data.error) {
            throw new Error(`${data.error.type}: ${data.error.message}`);
          }
        }
      })
      .catch(onerror);
  });
};

tinymce.init({
  selector: 'textarea', // change this value according to your HTML
  plugins: 'ai',
  toolbar: 'aidialog aishortcuts',
  ai_request
});
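When streaming is enabled, the API delivers the reply as a sequence of server-sent events, each carrying a JSON chunk whose new text sits at `choices[0].delta.content`, terminated by a literal `[DONE]` event. The sketch below replays sample `data:` payloads (illustrative values, with the `data: ` prefix already stripped, as the `onmessage` handler would see them) through the same parsing logic:

```javascript
// Sample streaming payloads: the first chunk carries only the role, later
// chunks carry content deltas, and '[DONE]' marks the end of the stream.
const events = [
  '{"choices":[{"delta":{"role":"assistant"},"index":0}]}',
  '{"choices":[{"delta":{"content":"Hello"},"index":0}]}',
  '{"choices":[{"delta":{"content":" world"},"index":0}]}',
  '[DONE]'
];

// Mirrors the onmessage handler: skip [DONE], parse each chunk, and keep
// only the content deltas (role-only and empty chunks are ignored).
const collectDeltas = (dataEvents) => {
  let text = '';
  for (const data of dataEvents) {
    if (data === '[DONE]') continue;
    const message = JSON.parse(data)?.choices?.[0]?.delta?.content;
    if (message) {
      text += message;
    }
  }
  return text;
};

console.log(collectDeltas(events)); // → 'Hello world'
```

In the integration itself, each delta is forwarded to the plugin immediately via `streamMessage` rather than accumulated, which is what produces the incremental typing effect in the editor.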