Attendi Speech Service Webcomponents
Attendi Speech web components provide a frontend to the Attendi Speech Service that can easily be integrated into any project that is built on web technologies.
This includes vanilla webpages, but also frameworks like React, Vue, Angular, and Svelte.
Getting started
The webcomponents are available as an npm package or from CDN.
Install locally with npm
To install with npm, run:
npm install @attendi/speech-service
To import the functionality into your project, add
import "@attendi/speech-service";
to the page in which you want to use the components.
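For example, a minimal sketch of an entry file after installing via npm (the file name is hypothetical; any bundler that resolves npm packages works):
// index.js – importing the package registers the custom elements,
// after which tags like <attendi-microphone> can be used in your HTML.
import "@attendi/speech-service";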
Use bundles
The package is also available as pre-built, single-file bundles. These are provided for more flexibility around development workflows: for example, if you would prefer to download a single file rather than use npm
and build tools. The bundles are standard JavaScript modules with no dependencies - any modern browser should be able to import and run the bundles from within a <script type="module">
like this:
<script type="module">
import { attendiMicrophonePlugins } from "https://cdn.jsdelivr.net/npm/@attendi/speech-service@<version>/cdn/speech-service.js";
// use `speech-service.min.js` for the minified version
// you can also use `https://cdn.jsdelivr.net/npm/@attendi/speech-service@<version>`
</script>
If you're using npm for client-side dependencies, you should use the npm package, not these bundles. The bundles intentionally combine the dependencies into a single file, which can cause your page to download more code than it needs.
Components
This package provides multiple webcomponents to facilitate recording and transcribing audio.
Attendi Microphone
After the import, Attendi's microphone component becomes available as a custom HTML tag:
<attendi-microphone></attendi-microphone>
TL;DR:
This component provides a UI for recording, takes care of recording audio and processing it. The component is built with extensibility in mind. It can be extended with plugins that add functionality to the component using the component's plugin APIs. Arbitrary logic can for instance be executed at certain points in the component's lifecycle, such as before recording starts, or when an error occurs.
The `AttendiMicrophone` component is a button that can be used to record audio. Its corresponding HTML tag is `<attendi-microphone>`.
Recording is started by clicking the button, and the recording can be stopped by clicking the button again. The same can be accomplished programmatically by calling its `start` and `stop` methods, as shown in the sketch below.
The component is built with extensibility in mind. It can be extended with plugins that add functionality to the component using the component's plugin APIs. Arbitrary logic can for instance be executed at certain points in the component's lifecycle, such as before recording starts, or when an error occurs, by registering callbacks with the functions in its `hooks` object, or using the microphone's other methods. See the `AttendiMicrophonePlugin` interface for more information. More information about the plugin APIs and available plugins can be found below.
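For example, a minimal sketch of toggling a recording programmatically (the surrounding button is hypothetical; `start` and `stop` are the methods mentioned above):
<attendi-microphone></attendi-microphone>
<button id="toggle">Toggle recording</button>
<script>
  const mic = document.querySelector("attendi-microphone");
  let recording = false;
  document.getElementById("toggle").addEventListener("click", () => {
    // Calling `start` / `stop` has the same effect as clicking the microphone button.
    recording ? mic.stop() : mic.start();
    recording = !recording;
  });
</script>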
Attributes
Other important attributes that can be set on the component are:
- `base-color`: Only accepts a HEX string, e.g. "#ff1234". The component will be themed with various shades of this color. If not set, a default color scheme will be used.
- `volume-feedback-type`: The component can show visual feedback of the volume to make it clear to the user that the microphone is working. This option specifies how the feedback should be shown. Currently, the only supported values are `outer` and `inner`. `outer` will show a colored area outside the component; it will therefore need some padding on the outside. `inner` will fill a percentage of the microphone icon depending on the volume. The default value is `inner`.
- `outer-volume-feedback-type`: If `volume-feedback-type` is set to `outer`, this attribute can be used to specify which side of the component should show the feedback. The default value is `all`, which will show the feedback on all sides. Other supported values are `left`, `right`, `top`, and `bottom`.
- `show-options`: If set to `true`, the component will show the options button. Currently the options include the ability to report using either the SOEP or the SOAP reporting method. If set to `false`, the component will not show the options button.
- `default-plugins`: The component uses a plugin system to extend its functionality. Previously, some plugins were always added. As this constrains the microphone to a specific set of use cases, we intend to remove this behavior. To maintain backwards compatibility in the meantime, it is possible to specify a list of default plugins. By default, the following plugins are added (mostly for backwards compatibility):
  - `notification-audio`: Play a sound when the microphone starts, stops, or encounters an error.
  - `volume-feedback`: Show a volume feedback animation in the microphone when recording.
  - `attendi-transcribe`: Transcribe recorded audio using the Attendi Speech Service when stopping the recording.
  - `linear-reporting-method`: Add menu options for using linear reporting methods (`SOEP`, `SOAP`, and `TIME`).
  - `error`: Show an error dialog and play an error sound when an error occurs. If the `silent` attribute is set to `true`, the error sound will not be played.
If no plugins should be added, set the `default-plugins` attribute to an empty list. Example:
<attendi-microphone default-plugins="[]"></attendi-microphone>
<!-- or if we do want the `volume-feedback` plugin for example -->
<attendi-microphone
  default-plugins='["volume-feedback"]'
></attendi-microphone>
<script>
  const mic = document.querySelector("attendi-microphone");
  // We're now free to add our own plugins, like our own transcribe plugin.
  mic.plugins.add("my-plugin", new MyPlugin());
</script>
- `stay-large`: By default, the component will shrink to a smaller size when recording. If set to `true`, the component will stay at the same size when recording.
- `no-volume-feedback` (deprecated): If set to `true`, the component will not show any volume feedback. Mostly here for backwards compatibility. The preferred way to disable volume feedback is to set the `default-plugins` attribute to a list excluding `volume-feedback`.
- `silent` (deprecated): By default, the component will play a sound when the recording is started and stopped, and when an error occurs, as we add the `notification-audio` plugin as one of the `default-plugins`. This can be disabled by setting this attribute to `true`, so that no audio files are loaded or played. The preferred approach is to explicitly set the `default-plugins` attribute to a list excluding `notification-audio`. It is then possible to supply your own plugin to play audio at the appropriate moments. If content security policies are an issue, you can override the URLs used to load the audio files from.
- `transcribe-api-config` (deprecated): See the documentation for the `ITranscribeAPIConfig` interface for more information on the expected structure. The attribute's value should be a serialized JSON string. Before version 0.14.0, this attribute was used to configure the microphone to connect with Attendi's APIs. This is still necessary if `attendi-transcribe` or `linear-reporting-method` is in the list of `default-plugins` (which is the case if you don't set `default-plugins="[]"`; see the `default-plugins` attribute for more information).
Instead, it is recommended to always add a transcription plugin manually. For example, if you want to connect with the transcription API from the frontend:
<!-- This is the recommended way to add a transcription plugin. -->
<attendi-microphone default-plugins="[]"></attendi-microphone>
<script type="module">
import { attendiMicrophonePlugins } from "@attendi/speech-service";
// We expose a plugin for transcribing audio with Attendi, but you can create your own.
const { AttendiTranscribePlugin } = attendiMicrophonePlugins;
const mic = document.querySelector("attendi-microphone");
// Note that the plugin config's structure is slightly different from the `transcribe-api-config` attribute's
// structure.
const attendiTranscribePluginConfig = {
customerKey: "customerKey",
userId: "userId",
config: {
model: "DistrictCare", // Usually not necessary to specify.
},
metadata: {
microphoneLocation: "location",
reportId: "id",
},
unitId: "unitId",
};
mic.plugins.add(
"attendi-transcribe",
new AttendiTranscribePlugin(
attendiTranscribePluginConfig,
(transcript) => {
const currentValue = textInput.value;
textInput.value = addTextTo(currentValue, transcript);
}
)
);
mic.plugins.activate();
</script>
In the deprecated approach (not recommended anymore), you would set the `transcribe-api-config` attribute to a JSON string or set the property directly:
<attendi-microphone
transcribe-api-config="JSON_REPRESENTATION_OF_TranscribeAPIConfig"
></attendi-microphone>
<!-- or -->
<attendi-microphone
transcribe-api-config='{"customerKey": "ck_12345", "userId": "myUser-1", "unitId": "myUnit-1", "modelType": "ResidentialCare"}'
></attendi-microphone>
<!--
When the recording is processed, the component will
emit an event `attendi-speech-segment` containing the transcribed text in `text` and `transcript`.
The previously used event `attendi-speech-service-done` is also emitted, but is deprecated
and will be removed in a future version.
-->
Alternatively, you can set the `transcribeAPIConfig` property directly:
const transcribeAPIConfig = {
customerKey: "ck_12345",
userId: "myUser-1",
unitId: "myUnit-1",
modelType: "ResidentialCare",
};
const mic = document.querySelector("attendi-microphone");
// At initialization, set the config.
mic.transcribeAPIConfig = transcribeAPIConfig;
To set a boolean attribute to `true`, you can simply set the attribute without a value:
<attendi-microphone silent></attendi-microphone>
Plugins
The component can be extended with plugins that add functionality to it. The general approach is to register callbacks at specific lifecycle events, such as when recording starts or stops, or when an error occurs; the component will call these callbacks at the appropriate times. Other functionality can be added as well. The plugin APIs are not fully stable yet, so use them with caution; the APIs listed below are the most stable.
Adding and activating plugins
We recommend setting the `default-plugins` attribute to an empty list, and adding all the plugins you need manually. In time, we will change `default-plugins` to be an empty list by default. This also gives you more control over the plugins that are added to the microphone. The library exposes an `attendiMicrophonePlugins` object that contains all the available plugins. Add these plugins to the microphone as follows:
<attendi-microphone default-plugins="[]"></attendi-microphone>
<script type="module">
import { attendiMicrophonePlugins } from "@attendi/speech-service";
const {
NotificationAudioPlugin,
AttendiTranscribePlugin,
VolumeFeedbackPlugin,
LinearReportingMethodPlugin,
AttendiErrorPlugin,
} = attendiMicrophonePlugins;
const mic = document.querySelector("attendi-microphone");
mic.plugins.add("notification-audio", new NotificationAudioPlugin());
mic.plugins.add(
"attendi-transcribe",
new AttendiTranscribePlugin(attendiTranscribePluginConfig, (transcript) =>
doSomethingWith(transcript)
)
);
mic.plugins.add("volume-feedback", new VolumeFeedbackPlugin());
mic.plugins.add(
"linear-reporting-method",
new LinearReportingMethodPlugin(attendiTranscribePluginConfig)
);
mic.plugins.add("attendi-error", new AttendiErrorPlugin());
mic.plugins.activate();
</script>
Note that the order in which the plugins are added is important, as the added lifecycle hooks will be called in the order in which the plugins were added. For instance, the `NotificationAudioPlugin` and `AttendiTranscribePlugin` plugins both add a hook to the `onStopRecording` event. If the `NotificationAudioPlugin` is added after the `AttendiTranscribePlugin`, the stop notification sound will only be played after the transcription is done, while we want the sound to play when the user clicks the button. Therefore we add the `NotificationAudioPlugin` first here.
Creating a plugin
Warning: the microphone's plugin APIs are still under development and subject to change.
A plugin is a class that implements the `AttendiMicrophonePlugin` interface. The functionality of any plugin is implemented in its `activate` method. This method is called when the `AttendiMicrophone.plugins.activate` method is called, which allows it to modify the microphone's behavior by calling plugin APIs. Any logic that needs to run when the microphone is removed from the document's DOM should be implemented in the `AttendiMicrophonePlugin.deactivate` method. This might for instance be necessary when the plugin changes some global state.
As an example, we could create a plugin that logs the length of the recorded audio when recording stops as follows:
class LogBufferSizePlugin implements AttendiMicrophonePlugin {
  activate(mic: AttendiMicrophone) {
    mic.hooks.onStopRecording((buffer) => {
      console.log(`The recorded audio buffer has length ${buffer.length}`);
    });
  }
}
We could then add this plugin to the microphone as follows:
<attendi-microphone></attendi-microphone>
<script>
const mic = document.querySelector("attendi-microphone");
mic.plugins.add("log-buffer-size", new LogBufferSizePlugin());
mic.plugins.activate();
</script>
Notice from the example that we register the plugin with a name, and that we have to call `mic.plugins.activate()` to activate the added plugin(s).
It is not strictly necessary to use plugins to call the plugin APIs. For instance, we can accomplish the same thing as in the example above by calling the `onStopRecording` hook directly:
<attendi-microphone></attendi-microphone>
<script>
const mic = document.querySelector("attendi-microphone");
mic.hooks.onStopRecording((buffer) => {
console.log(`The recorded audio buffer has length ${buffer.length}`);
});
</script>
which can be preferable for simple use cases. For more complex use cases, plugins can be a good way to structure the code, and to maintain state in the plugin object.
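For example, a minimal sketch of a stateful plugin that counts how many recordings have been made (the class name and counter are illustrative; it only relies on the `activate` method and the `onStartRecording` hook documented below, and a `deactivate` method can be added if cleanup is needed):
class RecordingCounterPlugin {
  constructor() {
    // State kept on the plugin instance across recordings.
    this.recordingCount = 0;
  }
  activate(mic) {
    mic.hooks.onStartRecording(() => {
      this.recordingCount += 1;
      console.log(`Recording number ${this.recordingCount} started`);
    });
  }
}
const mic = document.querySelector("attendi-microphone");
mic.plugins.add("recording-counter", new RecordingCounterPlugin());
mic.plugins.activate();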
Hooks
The microphone component contains a `hooks` object with multiple functions to add lifecycle hooks. Call them like so: `mic.hooks.onFirstClick(() => console.log("Recording will start soon!"));`.
All plugin API functions return a function that can be called to undo the registration:
const removeOnFirstClick = mic.hooks.onFirstClick(() =>
console.log("Recording will start soon!")
);
removeOnFirstClick(); // The callback will no longer be called.
The following hooks are supported:
- `onFirstClick`: Called when the microphone is clicked for the first time. Some browsers like Safari require a user interaction before some actions can be performed, like starting a recording or playing a sound.
- `onUIState`: Called when the UI state of the microphone changes. Takes the new state as the first argument. The UI state can be one of the following:
  - `NotStartedRecording`: The microphone is not recording and is not processing a recording.
  - `LoadingBeforeRecording`: The microphone is not yet recording, but is loading before the recording starts.
  - `Recording`: The microphone is recording.
  - `ProcessingRecording`: The microphone is processing a recording.
- `onBeforeStartRecording`: Called before recording starts.
- `onStartRecording`: Called when recording starts.
- `onBeforeStopRecording`: Called before recording stops. Takes as first argument the recorded audio buffer, and as second argument the parameters that were used to stop the recording (see `AttendiMicrophoneStopParameters`).
- `onStopRecording`: Called when recording stops. Takes as first argument the recorded audio buffer, and as second argument the parameters that were used to stop the recording (see `AttendiMicrophoneStopParameters`). This can be used to do something with the recorded audio, like transcribing it. The given buffer is a `Float32Array`.
- `onError` (unstable): Called when an error occurs.
- `onAttributeChanged` (unstable): Called when an attribute changes. Takes as arguments the name of the attribute, the old value, and the new value.
- `onPropertiesChanged` (unstable): Called when a property changes. Takes as argument a map of the changed properties.
While there are a couple more plugin APIs, these should be considered internal and are not documented here.
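As an illustration, a minimal sketch of reacting to UI-state changes with the `onUIState` hook (assuming the state is passed as the same string values used by the `attendi-microphone-ui-state-updated` event described below):
const mic = document.querySelector("attendi-microphone");
// Register the hook; the returned function removes the callback again.
const removeOnUIState = mic.hooks.onUIState((state) => {
  if (state === "ProcessingRecording") {
    console.log("Waiting for the recording to be processed...");
  }
});
// Later, when the callback is no longer needed:
removeOnUIState();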
Existing plugins
This package exposes a couple of pre-built plugins. Before version 0.14.0, some of these were included in the component by default. As this constrains the microphone to a specific set of use cases, we allow changing this behavior by setting the `default-plugins` attribute. For backwards compatibility, some of these plugins are still included by default (indicated with "default" in the following list). The following plugins are available:
- `NotificationAudioPlugin` (default): Play a sound when the microphone starts, stops, or encounters an error.
- `VolumeFeedbackPlugin` (default): Show a volume feedback animation in the microphone when recording.
- `AttendiTranscribePlugin` (default): Transcribe recorded audio using the Attendi Speech Service when stopping the recording.
- `LinearReportingMethodPlugin` (default): Add menu options for using linear reporting methods used in medical reporting, such as `SOEP`, `SOAP`, and `TIME`.
Communicating with the component (events)
The component emits (custom) events to communicate with your application. Your application can then take actions based on these events if necessary. The following events are emitted:
- `attendi-microphone-ui-state-updated`: When the UI state of the component changes, this event is emitted. The event contains the new UI state in `e.detail`. The possible values are `NotStartedRecording`, `LoadingBeforeRecording`, `Recording`, and `ProcessingRecording`.
- `attendi-microphone-volume-updated`: Emitted when the volume level of the microphone changes. The event contains the volume level in `e.detail.volume`. It also contains the "raw" volume level in `e.detail.rawVolume`. The volume level scales and normalizes the raw volume level such that it is in the range [0, 1] and the highs are clearly separated from the lows. You would usually want to use the `volume` value.
- `attendi-speech-service-error` (unstable API): Emitted when an error is thrown within the component. This allows for error handling outside of the component. The event contains the error type in `e.detail.error`. For more detailed information on the error, check `e.detail.fullError`.
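For instance, a minimal sketch of handling the error event outside the component (the handler body is illustrative):
const mic = document.querySelector("attendi-microphone");
mic.addEventListener("attendi-speech-service-error", (e) => {
  // The error type, e.g. "AttendiAuthenticationError".
  console.log(e.detail.error);
  // A more detailed error object.
  console.log(e.detail.fullError);
});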
If the default `AttendiTranscribePlugin` is used and `transcribe-api-config` is set on the component (deprecated), the component will emit the following events:
- `attendi-speech-segment`: Emitted when the audio transcript is available. The event contains the transcript data, which can be accessed as `e.detail.transcript`.
- `attendi-speech-service-done` (DEPRECATED, use `attendi-speech-segment` instead): same purpose as `attendi-speech-segment`.
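For example, a minimal sketch of reading the transcript from this event (only applicable with the deprecated setup described above; the text field with id "report-text" is hypothetical):
const mic = document.querySelector("attendi-microphone");
mic.addEventListener("attendi-speech-segment", (e) => {
  // The transcribed text for the recorded segment.
  const transcript = e.detail.transcript;
  document.querySelector("#report-text").value += transcript;
});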
Styling
The component can be further styled using the following CSS variables:
- `--attendi-microphone-max-width`: Controls the maximum width of the component. The value of this also depends on the component attribute `show-options`, since that controls whether the component will show an options button.
- `--attendi-border-radius`
- `--attendi-microphone-border-radius` (equivalent to `--attendi-border-radius`)
- `--attendi-microphone-border-top-left-radius`
- `--attendi-microphone-border-top-right-radius`
- `--attendi-microphone-border-bottom-left-radius`
- `--attendi-microphone-border-bottom-right-radius`
- `--attendi-microphone-height`
- `--attendi-microphone-padding`
- `--attendi-microphone-padding-top`
- `--attendi-microphone-padding-right`
- `--attendi-microphone-padding-bottom`
- `--attendi-microphone-padding-left`
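For example, a minimal sketch of overriding a few of these variables (the values are arbitrary):
<style>
  attendi-microphone {
    --attendi-microphone-max-width: 48px;
    --attendi-border-radius: 8px;
    --attendi-microphone-padding: 4px;
  }
</style>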
For fine-grained control over the color theming, the microphone exposes a `colorTheme` property. The color theme is an object with the following structure:
interface AttendiMicrophoneColorTheme {
fill: {
default: {
enabled: ColorValueHex;
hover: ColorValueHex;
active: ColorValueHex;
disabled: ColorValueHex;
};
active: {
enabled: ColorValueHex;
hover: ColorValueHex;
active: ColorValueHex;
disabled: ColorValueHex;
};
};
outline: {
default: {
enabled: ColorValueHex;
hover: ColorValueHex;
active: ColorValueHex;
disabled: ColorValueHex;
};
active: {
enabled: ColorValueHex;
hover: ColorValueHex;
active: ColorValueHex;
disabled: ColorValueHex;
};
};
icon: {
default: {
enabled: ColorValueHex;
hover: ColorValueHex;
};
active: {
enabled: ColorValueHex;
hover: ColorValueHex;
};
};
// Only used for outer volume feedback
volumeFeedback: ColorValueHex;
}
Internally, the color theme is generated from the `base-color` attribute if it is supplied, or from a default color theme. If you want to use a custom color theme, you can set the `colorTheme` property on the component. For instance, to change the icon color on hovering the button, you can do:
// Important: color should be a HEX string, e.g. "#ff1234"
const newHoverColor = "#0000ff";
const attendiMicrophone = document.querySelector("attendi-microphone");
const newColorTheme = { ...attendiMicrophone.colorTheme };
newColorTheme.icon.default.hover = newHoverColor;
attendiMicrophone.colorTheme = newColorTheme;
To change the microphone icon, use the `microphone-icon` slot. Example:
<attendi-microphone ...>
  <svg slot="microphone-icon">
    <defs>
      <!-- This id MUST be unique on the rendered page. If you are rendering using
      some template, we suggest creating some function that will return the svg template
      with a random id for this linearGradient. -->
      <linearGradient id="uniqueId" x1="0%" y1="100%" x2="0%" y2="0%">
        <!-- The offset percentage controls what percentage of the microphone cone is filled.
        100% means the microphone is fully filled in. Don't change the 'id'! The volume animation
        will stop working. -->
        <stop id="microphoneFill" offset="0%" stop-color="currentColor" />
        <stop id="transparentStop" offset="0%" stop-opacity="0" />
      </linearGradient>
    </defs>
    <!-- cone -->
    <!-- The `fill` must reference the id of the `linearGradient` tag. -->
    <path d="..." stroke="currentColor" fill="url(#uniqueId)" />
    <!-- base -->
    <path
      id="base"
      fill-rule="evenodd"
      clip-rule="evenodd"
      d="..."
      fill="currentColor"
    />
  </svg>
</attendi-microphone>
An SVG icon is recommended. By default, the Attendi Microphone shows the volume level by filling the microphone icon accordingly. It normally does this by finding a linear gradient def in the SVG icon and setting its offset to the volume level. Therefore it's important to have your SVG icon follow the same structure:
- Add a linear gradient as in the example above. Make sure the `stop` tag has an `id` of `microphoneFill`.
- Make sure the `path` tag that represents the microphone cone has a `fill` attribute that references the id of the `linearGradient` tag.
Configuration specification
To communicate with the Attendi transcription APIs, use the `AttendiTranscribePlugin` plugin. On initialization, it expects a configuration object with the following structure:
const attendiTranscribePluginConfig = {
apiURL: <apiURL>, // Specify the URL of the Attendi Speech Service API. Defaults to `https://api.attendi.nl`. If you want to use the sandbox environment, set this to `https://sandbox.api.attendi.nl`.
customerKey: <customerKey>, // Your customer API key, used to authenticate with Attendi APIs.
userId: <userId>, // Unique id assigned (by you) to your user.
config: {
model: modelType, // Which model to use, e.g. ModelType.ResidentialCare or "ResidentialCare"
},
metadata: {
reportId: <reportId>, // Unique id assigned (by you) to the report.
},
unitId: <unitId>, // Unique id assigned (by you) to the team or location of your user.
};
const attendiTranscribePlugin = new AttendiTranscribePlugin(
attendiTranscribePluginConfig,
(transcript) => {
// Do something with the transcript.
console.log(transcript);
}
);
Configuration specification (deprecated)
This section is here for backwards compatibility. It is only applicable if you are not changing the `default-plugins` attribute to exclude the `attendi-transcribe` plugin and are setting the `transcribe-api-config` attribute directly on the component.
All communication with the Attendi Speech Service backend (e.g. sending audio, receiving the transcript) is handled by the component. The component expects to receive an object consistent with the `ITranscribeAPIConfig` interface:
interface ITranscribeAPIConfig {
customerKey: string; // Your customer API key, used to authenticate to Attendi APIs.
userId: string; // Unique id assigned (by you) to your user
unitId: string; // Unique id assigned (by you) to the team or location of your user
userAgent?: string; // User agent string identifying the user device, OS and browser. Defaults to navigator.userAgent
modelType?: ModelType; // Which model to use, e.g. ModelType.ResidentialCare or "ResidentialCare"
apiURL?: string; // URL of the Attendi Speech Service API. Defaults to `https://api.attendi.nl`. If you want to use the sandbox environment, set this to `https://sandbox.api.attendi.nl`.
}
Here, `ModelType` is given by:
enum ModelType {
DistrictCare,
ResidentialCare,
}
or can be specified as the equivalent string, e.g. `"ResidentialCare"`.
There are two ways to supply the configuration object to the component:
- Using the `transcribe-api-config` attribute on the HTML component, and
- Manipulating the tag's `transcribeAPIConfig` property in JavaScript.
You are free to choose the one that fits your needs best.
Configuration using the `transcribe-api-config` attribute
It is possible to supply the component with a JSON representation of the transcribe config like so:
<attendi-microphone
transcribe-api-config="JSON_REPRESENTATION_OF_TranscribeAPIConfig"
></attendi-microphone>
More specifically:
<attendi-microphone
transcribe-api-config='{"customerKey": "ck_12345", "userId": "myUser-1", "unitId": "myUnit-1", "modelType": "ResidentialCare"}'
></attendi-microphone>
This method is to be preferred when you are able to render HTML from a template. In React for example, you could do the following:
import { ModelType } from "@attendi/speech-service";
function App() {
const transcribeAPIConfig = {
customerKey: "ck_12345",
userId: "myUser-1",
unitId: "myUnit-1",
modelType: ModelType.ResidentialCare,
};
const transcribeAPIConfigJSON = JSON.stringify(transcribeAPIConfig);
return (
<div className="App">
<attendi-speech-service
transcribe-api-config={transcribeAPIConfigJSON}
></attendi-speech-service>
</div>
);
}
Configuration by manipulating the tag's `transcribeAPIConfig` property in JavaScript
If you're not able to render HTML tags dynamically, for example when using vanilla HTML and JavaScript, set the tag's `transcribeAPIConfig` property yourself. For example:
const attendiMicrophone = document.querySelector("attendi-microphone");
// at initialization, set the config
attendiMicrophone.transcribeAPIConfig = transcribeAPIConfig;
Attendi Volume Over Time component
The `AttendiVolumeOverTime` component visualizes the volume of an audio signal over time. Its corresponding HTML tag is `<attendi-volume-over-time>`.
It displays a series of vertical bars whose heights correspond to the volume levels that have been added to it using its `addVolume` method. These bars are updated at a rate that is determined by `update-period-milliseconds`, creating a scrolling effect to the right. Any volume value added within a single update period is binned and averaged over that period. Any volume value outside of the range [0, 1] will raise an error.
The component can be started and stopped using the `start` and `stop` methods respectively. After calling `start`, volume values need to be added using the `addVolume` method. The `reset` method can be used to reset the volume bars after use.
Other important attributes that can be set on the component are:
- `update-period-milliseconds`: The time it takes before the volume bars are updated. The default value is 75 ms.
- `volume-bar-thickness-multiplier`: Allows controlling the thickness of the volume bars. Multiplies the default thickness of the volume bars. The default value is 1.
- `running`: Whether the volume bars are being updated or not. If true, the volume bars will update at the frequency specified by `updatePeriodMilliseconds`; if false, no updates will be rendered. Playback can be controlled by setting this attribute to true or false, or with this component's `start` and `stop` methods.
Styling attendi-volume-over-time
The component can be further styled with the following CSS variables:
- `--attendi-volume-over-time-background-color`: The background color of the component. Transparent by default.
- `--attendi-volume-over-time-border-radius`: The border radius of the component. 0px by default.
- `--attendi-volume-over-time-volume-bar-border-radius`: The border radius of the volume bars. 2px by default.
- `--attendi-volume-over-time-volume-bar-color`: The color of the volume bars. `#1c69e8` by default.
Events
The component emits the following events:
- `attendi-volume-over-time-updated`: Emitted when the volume bars are updated. The event detail contains the volume bins (an array of numbers).
Example
<attendi-volume-over-time
update-period-milliseconds="75"
volume-bar-thickness-multiplier="1"
></attendi-volume-over-time>
<script>
const attendiVolumeOverTime = document.querySelector(
"attendi-volume-over-time"
);
// When you want to start recording.
attendiVolumeOverTime.start();
attendiVolumeOverTime.addVolume(0.5);
// When you want to stop recording.
attendiVolumeOverTime.stop();
// Most likely, you will want to reset the volume bars after use.
attendiVolumeOverTime.reset();
</script>
<style>
attendi-volume-over-time {
--attendi-volume-over-time-volume-bar-color: #ff52e8;
--attendi-volume-over-time-background-color: #f0f0f0;
--attendi-volume-over-time-border-radius: 5px;
--attendi-volume-over-time-volume-bar-border-radius: 20px;
}
</style>
Integration with the Attendi Microphone
The `AttendiVolumeOverTime` component can be used together with the `AttendiMicrophone` component to provide visual feedback of the volume level while recording. The `AttendiMicrophone` component emits events with volume data that can be used to update the `AttendiVolumeOverTime` component.
<attendi-microphone></attendi-microphone>
<attendi-volume-over-time></attendi-volume-over-time>
<script>
const attendiMicrophone = document.querySelector("attendi-microphone");
const attendiVolumeOverTime = document.querySelector(
"attendi-volume-over-time"
);
// The microphone emits volume data with the `attendi-microphone-volume-updated` event.
attendiMicrophone.addEventListener("attendi-microphone-volume-updated", (event) =>
attendiVolumeOverTime.addVolume(event.detail.volume)
);
// The microphone emits an `attendi-microphone-ui-state-updated` event when its UI state changes.
// This can be used as a trigger to start or stop the volume over time visualizer.
attendiMicrophone.addEventListener(
"attendi-microphone-ui-state-updated",
(event) => {
const uiState = event.detail;
if (uiState === "Recording") {
// Start the volume over time visualizer so it can accept new volume data
// and animate it.
attendiVolumeOverTime.start();
} else if (uiState === "NotStartedRecording") {
// The `stop` method will cause the component to stop updating and accepting new volume data.
attendiVolumeOverTime.stop();
// The `reset` method will clear any volume data that has been added to the component.
attendiVolumeOverTime.reset();
}
}
);
</script>
Attendi Speech Service component (will be deprecated)
This component is designed to be used within a modal. When it is opened, it shows a text editor and audio recording controls. As this is quite a complex interaction, and the component is planned to be deprecated in the future, it is recommended to use the Attendi Microphone component instead.
The recorder component and modal component are available as custom HTML tags:
<attendi-speech-service-modal>
<attendi-speech-service></attendi-speech-service>
</attendi-speech-service-modal>
The modal component is designed specifically for use with the Attendi Speech Service component. Its size and position adapt to the viewport size.
Communicating with the speech service component
The component emits events to communicate with your application. Your application is responsible for taking actions based on these events. The following events are emitted:
- `attendi-speech-segment`: Emitted when the audio transcript is available. The event contains the transcript data, which can be accessed as `e.detail.transcript`.
- `attendi-speech-service-done` (DEPRECATED, use `attendi-speech-segment` instead): same purpose as `attendi-speech-segment`.
- `attendi-speech-service-cancel`: Emitted when the user clicks the "exit" button, which is the X button in the top-right corner of the component. This event should be handled by closing the view (e.g. modal) that contains the component.
- `attendi-speech-service-error`: Emitted when an error is thrown within the component. This allows for error handling outside of the component. The event contains the error type in `e.detail.error`. For more detailed information on the error, check `e.detail.fullError`.
Example:
let attendi_speech_service_element = document.querySelector(
"attendi-speech-service"
);
let transcript = "";
attendi_speech_service_element.addEventListener(
"attendi-speech-segment",
(e) => {
console.log(e);
transcript = e.detail.transcript;
}
);
// This event is fired when the "cancel" button is clicked.
attendi_speech_service_element.addEventListener(
"attendi-speech-service-cancel",
(_) => {
// `closeModal`, `modal`, and `attendi_speech_service_container` are application-specific.
closeModal(modal);
attendi_speech_service_container.removeChild(attendi_speech_service_element);
}
);
// This event is fired when an error is thrown during the transcription API call.
// This allows you to handle these errors yourself.
attendi_speech_service_element.addEventListener("attendi-speech-service-error", (e) => {
// the name of the error can be accessed like so
console.log(e.detail.error); // e.g. "AttendiAuthenticationError"
// a more detailed error object can be accessed like so
console.log(e.detail.fullError);
});
Styling the component
The component's colors and fonts can be customized to match your own theme. Use the following CSS variables:
attendi-speech-service {
/* Color for main buttons */
--attendi-speech-service-button-color-primary: YOUR_COLOR;
/* Color for secondary buttons */
--attendi-speech-service-button-color-secondary: YOUR_COLOR;
--attendi-speech-service-button-text-color: YOUR_COLOR;
}
Troubleshooting
The Attendi Modal is obscured by other elements on the page
It may occur that the modal component is obscured by other elements of your application. This happens due to `z-index` issues. It is probably the case that either the `z-index` on the modal is too low (though it has a sensible default value), or there is a "nested `z-index`" issue.
To mitigate this, either set a higher `z-index` on the modal HTML tag, or make sure the tag is not nested inside another tag. A simple way to ensure this is to place the tag just before the end of your document body.
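For example, a minimal sketch of raising the modal's `z-index` from CSS (the value is arbitrary; pick something higher than the elements obscuring the modal):
<style>
  /* Assumes the modal tag is placed directly under <body>, as suggested above. */
  attendi-speech-service-modal {
    z-index: 10000;
  }
</style>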
Slow to start recording on Safari
On some devices using the Safari browser, it can take quite a long time before the recording actually starts. After some investigation, we found that on a MacBook Pro, it could take up to 3 (!) seconds for the recording to start, whereas it would only take about 200 ms on Chrome. It seems to be a bit better on iOS
devices. A couple of things contribute to this bottleneck:
- The web API `navigator.getUserMedia` is used to allow the browser to access the device's microphone. Safari's implementation seems to be especially slow.
- `audioWorklet.addModule` is used to load a custom audio processing script. Here again, Safari's implementation is very slow.
These are web APIs that are out of our control, so there is not much we can currently do about it.
Sometimes the UI seems to freeze when starting the recording. This is because `getUserMedia` somehow uses a lot of CPU in Safari, blocking the UI thread. It is not possible to move the call off the main thread, as `getUserMedia` is not available to web workers.