How can I overwrite part of web audio with new recorded audio?
I am successfully recording audio, storing it, and reloading it into an Audio object for playback in the browser.
However, I would like to be able to record new audio and "splice" it into the original recording at a certain time offset, completely replacing that portion of the original recording onward from that offset. For instance, suppose I recorded 10 seconds of audio from the microphone, and subsequently, I wished to "record over" the last 5 seconds of that audio with 8 seconds of totally new audio, ending up with a new bufferArray that I could persist. I've spent some hours researching this and still am pretty vague on how to do it. If anybody has any suggestions, I'd be appreciative.
The closest examples I could find involved getting two buffers and attempting to concatenate them, as in this fiddle. However, I'm having some difficulties relating this fiddle to my code and what I need to do. Certain posts mention I need to know the sample rate, create new buffers with that same sample rate, and copy data from two buffers into the new buffer. But there are precious few working examples on this technique, and I'm heavily scratching my head trying to figure this out from just the MDN docs.
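To make the sample-rate remark concrete, here is a minimal sketch (not from the question's code) of a sample-level splice on decoded PCM channel data, i.e. the Float32Array you get from AudioBuffer.getChannelData(0) after decodeAudioData. The function name and variables are illustrative, and this assumes both clips were decoded at the same sample rate:

```javascript
// Sketch: splice newData into original at offsetSeconds, replacing everything
// from that offset onward. Operates on decoded PCM channel data
// (Float32Array), e.g. from AudioBuffer.getChannelData(0).
function spliceChannelData(original, newData, offsetSeconds, sampleRate) {
  // Convert the time offset into a whole number of samples.
  const offsetSamples = Math.floor(offsetSeconds * sampleRate);
  const keep = original.subarray(0, offsetSamples); // portion to preserve
  const result = new Float32Array(keep.length + newData.length);
  result.set(keep, 0);
  result.set(newData, keep.length);
  return result;
}

// 10 s of original audio, "record over" the last 5 s with 8 s of new audio
// at 44100 Hz: the result is 5 s + 8 s = 13 s long.
const sampleRate = 44100;
const original = new Float32Array(10 * sampleRate);
const fresh = new Float32Array(8 * sampleRate);
const spliced = spliceChannelData(original, fresh, 5, sampleRate);
console.log(spliced.length === 13 * sampleRate); // true
```

The result could then be written back into a fresh AudioBuffer (created with the same sample rate) via copyToChannel, or encoded for persistence.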
Here's the code I have working now.
const saveRecordedAudio = (e) => {
  console.log("Audio data available", e.data);
  const reader = new FileReader();
  reader.addEventListener("loadend", function() {
    // reader.result contains the contents of the blob as an ArrayBuffer
    let bufferArray = reader.result;
    // From: https://stackoverflow.com/questions/9267899/arraybuffer-to-base64-encoded-string
    let base64String = btoa([].reduce.call(new Uint8Array(bufferArray), function(p, c) { return p + String.fromCharCode(c); }, ''));
    // persist base64-encoded audio data on filesystem here.
    storeRecordedAudio(base64String); // defined elsewhere and not shown here for brevity's sake
  });
  reader.readAsArrayBuffer(e.data);
  const audioUrl = window.URL.createObjectURL(e.data);
  // Put the recorded audio data into the browser for playback.
  // From: http://stackoverflow.com/questions/33755524/how-to-load-audio-completely-before-playing (first answer)
  const audioObj = new Audio(audioUrl);
  audioObj.load();
};
const recordAudio = () => {
  navigator.getUserMedia = ( navigator.getUserMedia ||
                             navigator.webkitGetUserMedia ||
                             navigator.mozGetUserMedia ||
                             navigator.msGetUserMedia );
  if (navigator.getUserMedia) {
    navigator.getUserMedia(
      { // constraints - only audio needed for this app
        audio: true
      },
      // Success callback
      function(stream) {
        const mediaRecorder = new MediaRecorder(stream);
        mediaRecorder.ondataavailable = saveRecordedAudio;
        mediaRecorder.start(); // recording never begins without this
      },
      // fail callback
      function(err) {
        console.log('Could not record.');
      }
    );
  }
};
// I've already fetched a previously recorded audio buffer encoded as
// a base64 string, so place it into an Audio object for playback
const setRecordedAudio = (b64String) => {
  const labeledAudio = 'data:video/webm;base64,' + b64String;
  const audioObj = new Audio(labeledAudio);
  audioObj.load();
};
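For the persistence round trip above, the stored base64 string eventually has to be turned back into bytes before the audio data can be re-assembled into a Blob or spliced. A small hypothetical helper (base64ToBytes is an illustrative name; btoa/atob are available in browsers and in Node 16+):

```javascript
// Decode a stored base64 string back into a Uint8Array; its .buffer
// property gives the underlying ArrayBuffer if one is needed.
function base64ToBytes(base64String) {
  const binary = atob(base64String);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return bytes;
}

// Round trip: encode three bytes, decode them back.
const encoded = btoa(String.fromCharCode(1, 2, 3));
const decoded = base64ToBytes(encoded);
console.log(decoded.length === 3 && decoded[2] === 3); // true
```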
javascript web audio html5-audio web-audio-api
Just saving the bufferArrays passed to a number of Blob constructors may help. If you have the original corresponding arrays, you may add or subtract them from an aggregating bufferArray and pass that final buffer to a new Blob constructor, thereby realizing the requirement for mix-and-match of audio clips.
– Robert Rowntree
Nov 24 '18 at 15:55
github.com/higuma/mp3-lame-encoder-js/blob/master/src/… - an mp3 example of where you might adjust the raw ArrayBuffer in order to mix and aggregate the respective clips and their original ArrayBuffers.
– Robert Rowntree
Nov 24 '18 at 16:04
1
npmjs.com/package/blob-to-buffer to get back to arrayBuff from a blob.
– Robert Rowntree
Nov 24 '18 at 19:46
Thank you for the tips. This may help me indeed. However, I'm still not sure if I can just copy an arbitrary part of the first bufferArray into a new array, then aggregate with another bufferArray (using Blob), since I don't know how to relate number of bytes to copy, to milliseconds of recorded sound. Ie, suppose I need 5.27s of the first clip to be combined with the entirety of the second clip. How would I calculate how many bytes to copy out of the first bufferArray?
– Will Kessler
Nov 25 '18 at 5:47
1
I would also look at ffmpeg on a Node back end to manage the timestamps for you. You would only make CLI or API calls, and ffmpeg does all the timestamp work. The downside is that all the clips have to be on the cloud/back end for the CLI or API to work.
– Robert Rowntree
Nov 25 '18 at 17:28
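The byte-count question in the comment thread above has a direct answer only for uncompressed PCM, where bytes map linearly to time. The sketch below (secondsToBytes is an illustrative name) shows that arithmetic; note it deliberately does not apply to the compressed webm/opus blobs MediaRecorder produces, where no such linear mapping exists:

```javascript
// For *uncompressed* PCM only: map a time offset to a byte offset.
// bytes = frames * channels * bytesPerSample, where frames = seconds * rate.
function secondsToBytes(seconds, sampleRate, channels, bytesPerSample) {
  // Round down to a whole frame so we never cut a sample in half.
  const frames = Math.floor(seconds * sampleRate);
  return frames * channels * bytesPerSample;
}

// 0.5 s of 44100 Hz stereo 16-bit PCM:
console.log(secondsToBytes(0.5, 44100, 2, 2)); // 88200
// An arbitrary offset like the 5.27 s from the comment works the same way:
console.log(secondsToBytes(5.27, 44100, 2, 2));
```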
edited Nov 25 '18 at 2:19 by dhilt
asked Nov 24 '18 at 4:25 by Will Kessler
1 Answer
If I understand what you're trying to do correctly, here is my way to deal with this problem:
1- Declare a blob array and store the recorded data in it:
var chunks = [];
2- We need to know the number of seconds recorded:
Use the start(1000) method with its timeslice parameter to limit the number of milliseconds in each blob. That way every chunk of data is approximately 1 second long (chunks.length = number of seconds recorded). To fill the chunks array with data, the ondataavailable event handler is our friend:
mediaRecorder.ondataavailable = function( event ) {
  chunks.push( event.data );
};
Now we deal with a simple Array instead of ArrayBuffers :-D (In your example, we would delete or override the last 5 items and keep appending new blobs.)
3- Finally, when our recording is ready (the chunks array), join it all together using the Blob constructor:
new Blob(chunks, {'type' : 'audio/webm;codecs=opus'});
I hope this helps :-D
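The chunk bookkeeping described above can be sketched as follows (overwriteFrom is an illustrative name, and strings stand in for the Blob chunks so the arithmetic is visible; note that with real webm data the chunk boundaries are only approximate, and chunks are playable only when concatenated from the start of the recording):

```javascript
// With 1 s timeslices, chunks.length ≈ seconds recorded. Keep the first
// keepSeconds chunks, drop the rest, and append the new recording's chunks.
function overwriteFrom(chunks, newChunks, keepSeconds) {
  return chunks.slice(0, keepSeconds).concat(newChunks);
}

// 10 s old recording, overwrite the last 5 s with an 8 s new recording:
const oldChunks = Array.from({ length: 10 }, (_, i) => `old-${i}`);
const newChunks = Array.from({ length: 8 }, (_, i) => `new-${i}`);
const combined = overwriteFrom(oldChunks, newChunks, 5);
console.log(combined.length); // 13

// In the browser, the real chunks would then be joined for playback/storage:
// new Blob(combined, { type: 'audio/webm;codecs=opus' });
```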
Thank you, this is a useful tip. I will study the start() method. But as I commented above, what I really need is to be able to preserve an arbitrary number of milliseconds of the first clip and combine it with the entirety of the second clip. With your suggestion, wouldn't I be limited to a resolution of 1s? I mean, I suppose the chunks could be 100ms long which may be small enough to not cause dropout... but it could get unwieldy to keep all the chunks around forever just in case the user wants to add another clip.
– Will Kessler
Nov 25 '18 at 5:49
The above start(1000) is just an example, you could use 100 instead of 1000 if that works for you.
– CryptoBird
Nov 26 '18 at 14:41
As far as I know, this is the only easy way to do it. And please keep in mind that it's not easy to crop out an exact specific part of an audio file (the 5.27s you mentioned in your comment above), because it contains multiple sound signals.
– CryptoBird
Nov 26 '18 at 14:44
Please, let me know if there is any other easy way to do that. Thanks.
– CryptoBird
Nov 26 '18 at 14:45
Your Answer
StackExchange.ifUsing("editor", function () {
StackExchange.using("externalEditor", function () {
StackExchange.using("snippets", function () {
StackExchange.snippets.init();
});
});
}, "code-snippets");
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "1"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f53455124%2fhow-can-i-overwrite-part-of-web-audio-with-new-recorded-audio%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
1 Answer
1
active
oldest
votes
1 Answer
1
active
oldest
votes
active
oldest
votes
active
oldest
votes
If I get what you're trying to do correctely, here is my way to deal with this problem:
1- We should declare a blob array and store data in it:
var chuncks = ;
2- We need to know the number of seconds recorded :
We should use the start(1000) method with the timeslice parameter in order to limit the number of milliseconds in each blob,
in this case, we know that every chunk of data is approximately 1 second long ( chunks.length = number of seconds recorded).And to fill in the chunks array with data, ondataavailable event handler
is our friend :
mediaRecorder.ondataavailable = function( event ) {
chunks.push( event.data );
}
Now, we have to deal with simple Arrays instead of Buffer Arrays :-D (In your example, we should delete or override the last 5 items and keep adding blobs to it)
3- And finally, when our recording is ready (chunks array):
we should join all together using the Blob constructor:
new Blob(chunks, {'type' : 'audio/webm;codecs=opus'});
I hope this helps :-D
Thank you, this is a useful tip. I will study the start() method. But as I commented above, what I really need is to be able to preserve an arbitrary number of milliseconds of the first clip and combine it with the entirety of the second clip. With your suggestion, wouldn't I be limited to a resolution of 1s? I mean, I suppose the chunks could be 100ms long which may be small enough to not cause dropout... but it could get unwieldy to keep all the chunks around forever just in case the user wants to add another clip.
– Will Kessler
Nov 25 '18 at 5:49
The above start(1000) is just an example, you could use 100 instead of 1000 if that works for you.
– CryptoBird
Nov 26 '18 at 14:41
As far as I know, this is the only easy way to do that, and please, keep in mind that, It's not easy to crop out an exact specific part of an audio file ( 5.27s as you mentioned in your comment above), cause, it contains multiple sound signals.
– CryptoBird
Nov 26 '18 at 14:44
Please, let me know if there is any other easy way to do that. Thanks.
– CryptoBird
Nov 26 '18 at 14:45
add a comment |
If I get what you're trying to do correctely, here is my way to deal with this problem:
1- We should declare a blob array and store data in it:
var chuncks = ;
2- We need to know the number of seconds recorded :
We should use the start(1000) method with the timeslice parameter in order to limit the number of milliseconds in each blob,
in this case, we know that every chunk of data is approximately 1 second long ( chunks.length = number of seconds recorded).And to fill in the chunks array with data, ondataavailable event handler
is our friend :
mediaRecorder.ondataavailable = function( event ) {
chunks.push( event.data );
}
Now, we have to deal with simple Arrays instead of Buffer Arrays :-D (In your example, we should delete or override the last 5 items and keep adding blobs to it)
3- And finally, when our recording is ready (chunks array):
we should join all together using the Blob constructor:
new Blob(chunks, {'type' : 'audio/webm;codecs=opus'});
I hope this helps :-D
Thank you, this is a useful tip. I will study the start() method. But as I commented above, what I really need is to be able to preserve an arbitrary number of milliseconds of the first clip and combine it with the entirety of the second clip. With your suggestion, wouldn't I be limited to a resolution of 1s? I mean, I suppose the chunks could be 100ms long which may be small enough to not cause dropout... but it could get unwieldy to keep all the chunks around forever just in case the user wants to add another clip.
– Will Kessler
Nov 25 '18 at 5:49
The above start(1000) is just an example, you could use 100 instead of 1000 if that works for you.
– CryptoBird
Nov 26 '18 at 14:41
As far as I know, this is the only easy way to do that, and please, keep in mind that, It's not easy to crop out an exact specific part of an audio file ( 5.27s as you mentioned in your comment above), cause, it contains multiple sound signals.
– CryptoBird
Nov 26 '18 at 14:44
Please, let me know if there is any other easy way to do that. Thanks.
– CryptoBird
Nov 26 '18 at 14:45
add a comment |
If I get what you're trying to do correctely, here is my way to deal with this problem:
1- We should declare a blob array and store data in it:
var chuncks = ;
2- We need to know the number of seconds recorded :
We should use the start(1000) method with the timeslice parameter in order to limit the number of milliseconds in each blob,
in this case, we know that every chunk of data is approximately 1 second long ( chunks.length = number of seconds recorded).And to fill in the chunks array with data, ondataavailable event handler
is our friend :
mediaRecorder.ondataavailable = function( event ) {
chunks.push( event.data );
}
Now, we have to deal with simple Arrays instead of Buffer Arrays :-D (In your example, we should delete or override the last 5 items and keep adding blobs to it)
3- And finally, when our recording is ready (chunks array):
we should join all together using the Blob constructor:
new Blob(chunks, {'type' : 'audio/webm;codecs=opus'});
I hope this helps :-D
If I get what you're trying to do correctely, here is my way to deal with this problem:
1- We should declare a blob array and store data in it:
var chuncks = ;
2- We need to know the number of seconds recorded :
We should use the start(1000) method with the timeslice parameter in order to limit the number of milliseconds in each blob,
in this case, we know that every chunk of data is approximately 1 second long ( chunks.length = number of seconds recorded).And to fill in the chunks array with data, ondataavailable event handler
is our friend :
mediaRecorder.ondataavailable = function( event ) {
chunks.push( event.data );
}
Now, we have to deal with simple Arrays instead of Buffer Arrays :-D (In your example, we should delete or override the last 5 items and keep adding blobs to it)
3- And finally, when our recording is ready (chunks array):
we should join all together using the Blob constructor:
new Blob(chunks, {'type' : 'audio/webm;codecs=opus'});
I hope this helps :-D
edited Nov 24 '18 at 17:09
answered Nov 24 '18 at 15:27
CryptoBirdCryptoBird
16012
16012
Thank you, this is a useful tip. I will study the start() method. But as I commented above, what I really need is to be able to preserve an arbitrary number of milliseconds of the first clip and combine it with the entirety of the second clip. With your suggestion, wouldn't I be limited to a resolution of 1s? I mean, I suppose the chunks could be 100ms long which may be small enough to not cause dropout... but it could get unwieldy to keep all the chunks around forever just in case the user wants to add another clip.
– Will Kessler
Nov 25 '18 at 5:49
The above start(1000) is just an example, you could use 100 instead of 1000 if that works for you.
– CryptoBird
Nov 26 '18 at 14:41
As far as I know, this is the only easy way to do that, and please, keep in mind that, It's not easy to crop out an exact specific part of an audio file ( 5.27s as you mentioned in your comment above), cause, it contains multiple sound signals.
– CryptoBird
Nov 26 '18 at 14:44
Please, let me know if there is any other easy way to do that. Thanks.
– CryptoBird
Nov 26 '18 at 14:45
add a comment |
Just saving the bufferArrays passed to each Blob constructor may help. If you keep the original corresponding arrays, you can add them to (or subtract them from) an aggregate bufferArray and pass that final array to a new Blob constructor, thereby meeting the requirement to mix and match audio clips.
– Robert Rowntree
Nov 24 '18 at 15:55
github.com/higuma/mp3-lame-encoder-js/blob/master/src/… - an MP3 example of where you might adjust the raw arrayBuffer in order to mix and aggregate the respective clips and their original arrayBuffers.
– Robert Rowntree
Nov 24 '18 at 16:04
npmjs.com/package/blob-to-buffer can get you back to an arrayBuffer from a Blob.
– Robert Rowntree
Nov 24 '18 at 19:46
Thank you for the tips. This may help me indeed. However, I'm still not sure if I can just copy an arbitrary part of the first bufferArray into a new array and then aggregate it with another bufferArray (using Blob), since I don't know how to relate the number of bytes to copy to milliseconds of recorded sound. That is, suppose I need 5.27s of the first clip to be combined with the entirety of the second clip. How would I calculate how many bytes to copy out of the first bufferArray?
– Will Kessler
Nov 25 '18 at 5:47
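The bytes-to-seconds question above has a clean answer only for raw PCM, not for the compressed bytes inside the recorded Blob (where byte offsets don't map linearly to time). Once decoded, the conversion is just arithmetic; the helper names below are illustrative:

```javascript
// For decoded PCM, time maps linearly to sample frames:
//   frames = seconds * sampleRate
function samplesForSeconds(seconds, sampleRate) {
  return Math.round(seconds * sampleRate);
}

// And frames map to bytes via channel count and sample width
// (e.g. 2 bytes per sample for 16-bit PCM; an AudioBuffer's
// Float32Array data uses 4 bytes per sample).
function bytesForSeconds(seconds, sampleRate, channels, bytesPerSample) {
  return samplesForSeconds(seconds, sampleRate) * channels * bytesPerSample;
}

// e.g. 5.27 s of 44.1 kHz stereo 16-bit audio:
//   samplesForSeconds(5.27, 44100)      -> 232407 frames
//   bytesForSeconds(5.27, 44100, 2, 2)  -> 929628 bytes
```

This is why the decode-first route (decodeAudioData, splice the sample arrays, re-encode) is more tractable than slicing the compressed bufferArray directly.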
I would also look at ffmpeg on the Node back end to manage the timestamps for you. You would only make CLI or API calls, and ffmpeg would handle all the timestamp work for you. The downside is that all the clips have to be on the cloud/back end for the CLI or API calls to work.
– Robert Rowntree
Nov 25 '18 at 17:28