Grasshopper

algorithmic modeling for Rhino

Hey, I have a few questions regarding how to interpret sound in Grasshopper.
1. How do I eliminate the super tall line at the end of the sound wave?
2. How do I smooth the whole sound curve?
3. How do I extract the max or min point (top & bottom of the amplitude) of each sound length, or a specific domain of the Z values of the curve?

Thanks very much,
DC


Replies to This Discussion

1) Depends on where this click is coming from. You could either clip the range of amplitude values or shorten the list of samples (both options are sketched after point 3).

2) I'd do this with a script averaging over your samples (a moving average is sketched after point 3). You could build this in GH logic, but everything that comes to my mind involves rather unnecessary amounts of data duplication.

3) There's a component called Bounds which returns a bounding (min/max) interval for any list of input samples.
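For what it's worth, here is a rough plain-Python sketch of all three suggestions (e.g. inside a GH script component). The name samples and the cutoff, tail and window values are illustrative assumptions, not anything defined in this thread:

```python
# Rough plain-Python sketches of the three suggestions above.
# 'samples' is assumed to be the list of waveform values from the capture
# component; the cutoff, tail length and window size are arbitrary.

def clip_range(samples, limit=0.5):
    """1) Constrain every sample to [-limit, +limit]."""
    return [max(-limit, min(limit, s)) for s in samples]

def drop_tail(samples, n=100):
    """1) Or simply discard the last n samples where the spike sits."""
    return samples[:-n] if 0 < n < len(samples) else list(samples)

def moving_average(samples, window=32):
    """2) Average each sample with its neighbours (simple smoothing)."""
    if window < 2 or window > len(samples):
        return list(samples)
    out, running = [], sum(samples[:window])
    out.append(running / window)
    for i in range(window, len(samples)):
        running += samples[i] - samples[i - window]
        out.append(running / window)
    return out

def bounds(samples):
    """3) The (min, max) interval the Bounds component would return."""
    return min(samples), max(samples)
```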

P.S.: How do you get your sound into GH?

Capturing sound http://www.food4rhino.com/project/firefly

2. Why a script? Interpret the values as the Z coordinates of a series of Point3d, then create a curve and use the Smooth Curve component (Util panel).
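A minimal GhPython sketch of that wiring, assuming a list input named samples and an arbitrary X spacing dx (the Smooth Curve component would then act on the resulting curve):

```python
# Minimal GhPython sketch: sample values become the Z coordinate of a row
# of points, which are interpolated into a curve. 'samples' is an assumed
# list input on the script component; 'dx' is an arbitrary X spacing.
import Rhino.Geometry as rg

dx = 0.01
points = [rg.Point3d(i * dx, 0.0, z) for i, z in enumerate(samples)]
a = rg.Curve.CreateInterpolatedCurve(points, 3)   # degree-3 interpolated curve
```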

I wouldn't want to do this, as sound usually involves heavy loads of samples. Better to handle all the data fixup without geometry involved.

Each component already duplicates the data at its input and quite often at its output again. Then you don't know what extra steps are taken when the curve is smoothed... So handling all this in one single step without the extra duplication might have a considerable impact on memory usage and probably speed.

This is a group project. We get the sound from the mic using the new sound capture component in the updated Firefly. Then we have defined the amplitude and wavelength.

Oh, I should have updated Firefly. :D

So to get this straight: you're displaying a spectrum? Then the spike usually comes either from calculation artifacts in the FFT, or you have a bad mic/connection/soundcard that gives you a DC offset at the ADC input.

I don't know how Firefly's averaging/smoothing components work, but they might be what you are looking for to smooth the data.

Hi Wildon,

I'll try to explain how the sound capture component works in the hope of better answering some of your questions.  Basically, the sound capture component samples the microphone channels for a given duration (in your case you're using 0.075 seconds as the time interval to sample the sound).  The maximum duration is 1 second, which will return a list of 44100 waveform values (22050 samples each for the left and right channels).  Using your time interval will reduce the length of time to sample the data, and thus you'll have fewer data points (in my tests with your settings I get 1653 data points per channel).  These data points form the waveform for each channel, but essentially it's just a list of numbers.

You can easily do whatever you want to a list of numbers.  For example, the Minimum component under Math/Utility returns the lesser of two numbers: you could feed it your list and a cutoff number and it would always return the lesser of the two, so you could envision this sort of like a cap (or in this case it would be similar to a low pass filter where low frequencies are allowed to pass through, but high frequencies are cut off).  Similarly, you could use a Maximum component to return the larger of two values, so you could cut off the lower frequencies too, if you were only interested in the middle portion (using the low pass and high pass filters together is basically a band pass filter).  Anyway, perhaps these components will help you take out some of the spikes in your data.
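In list terms, that cap/floor idea amounts to something like this sketch (the dummy samples and cutoff values are purely illustrative):

```python
# Plain-Python illustration of the cap/floor idea described above.
samples = [0.10, 0.92, -0.75, 0.20]        # dummy waveform values

capped  = [min(s, 0.4)  for s in samples]  # Minimum: nothing above +0.4
floored = [max(s, -0.4) for s in samples]  # Maximum: nothing below -0.4
banded  = [max(min(s, 0.4), -0.4) for s in samples]   # both together
```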

Hannes brought up the FFT.  Right now the waveform numbers are not being filtered in any way (they're just the raw sound data).  I plan on implementing a version of the Fast Fourier Transform (FFT) to analyze the frequency spectrum for its primary frequencies... but I haven't done this yet.  So, I don't think the spike is really coming from that.  More than likely you've just taken a sample where a very loud or high frequency sound occurred right toward the end of the sound capture.
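Just to illustrate what such an FFT pass would eventually do (again, this is not something the component does yet), here is an offline NumPy sketch that reads the dominant frequency off a dummy tone:

```python
# Offline illustration only (NumPy, outside GH): a dummy 440 Hz tone is
# captured for 0.075 s at 44.1 kHz and its dominant frequency is read
# off the magnitude spectrum.
import numpy as np

fs = 44100                                   # samples per second
t = np.arange(0, 0.075, 1.0 / fs)            # 0.075 s, as used in the thread
wave = 0.5 * np.sin(2 * np.pi * 440.0 * t)   # dummy tone

spectrum = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(len(wave), 1.0 / fs)
print(freqs[np.argmax(spectrum)])            # roughly 440 Hz
```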

Lastly, you mentioned you were interested in smoothing the samples.  You might want to look at the Volume output.  Essentially the volume output is the average of the absolute value of all of the samples recorded in both the left and right channels.  The maximum value this could ever be is 1.0 and the minimum is 0.0.  Usually it's a pretty small number, but if you introduce some loud sound, you'll see the volume level increase.  Since this number is averaged, it's already relatively smooth... but you could feed it into either the Moving Average Smoothing component or the new Temporal Smoothing component to add additional smoothing to this value.  Both are good for different things, but the Temporal Smoothing component could work nicely here.  Anyway, the volume number will give you a single value for the intensity of the waveform at any given time, so maybe this could be useful for what you're trying to do.
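As a rough sketch of both ideas: the volume() function below follows the description above (mean of the absolute sample values), while the exponential smoother is only one possible temporal-smoothing scheme, not necessarily what the Firefly component does internally:

```python
def volume(samples):
    """Average of the absolute sample values; stays between 0.0 and 1.0."""
    return sum(abs(s) for s in samples) / len(samples)

def temporal_smooth(new_value, previous, factor=0.2):
    """Blend the newest reading with the previous smoothed value."""
    return previous + factor * (new_value - previous)

# Feeding successive capture frames through the smoother (dummy data):
smoothed = 0.0
for frame in ([0.10, -0.20, 0.05], [0.40, -0.50, 0.30]):
    smoothed = temporal_smooth(volume(frame), smoothed)
```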

HTH,

Andy 

The acoustical engineer in me urges me to correct a few things...

First, please be careful with your wording. j1988, wavelength is the distance a sound wave travels in one full cycle, so the word is usually tied to a frequency and might be used synonymously. If you are talking about the length of your captured waveform, use capture length, recording interval or even wave length. You had me thinking you were displaying a spectrum (amplitude over wavelength or frequency).
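For reference, the tie between the two is simply wavelength = speed of sound / frequency, e.g.:

```python
c = 343.0            # speed of sound in air at about 20 °C, m/s
f = 440.0            # an example frequency, Hz
wavelength = c / f   # about 0.78 m for a 440 Hz tone
```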

Andy, please use frequencies only if a signal is in the frequency domain or you have some components actually handling frequencies. The tooltip for your component might lead to the same confusion as j1988's wavelength. Otherwise this component is a great addition.

About the min/max: these guys will clip the signal in the time domain. Clipping can be used to constrain the signal's amplitude, but it does not remove any frequencies; clipping actually introduces high frequency content to the signal.

If you want to use the averaging/smoothing components, make sure you do so on the absolute values. Otherwise you will just end up averaging everything to zero.
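A tiny illustration of that point with dummy data:

```python
wave = [0.5, -0.5, 0.5, -0.5]                 # symmetric dummy signal
print(sum(wave) / len(wave))                  # 0.0 -> everything cancels
print(sum(abs(s) for s in wave) / len(wave))  # 0.5 -> a usable level
```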

Thanks Hannes for the clarifications.  I will make sure to be more consistent in the naming conventions in the future, and will make changes to the sound capture component so it will be correct in the next release.

Hello Andy. The Firefly tabs (Networking and Audio) do not show up in my GH. I am using the latest versions of Rhino and Grasshopper, and also the latest Firefly, on 64-bit Windows.

I would appreciate your help, as I need the sound components and cannot figure it out.

Thanks for your clarification, Hannes. Will the wavelength change over time as one speaks into the mic?
One of my issues is how to smooth the sound wave for analysis without freezing my PC.


The captured sound is a mixture of frequencies (wavelengths), so yes, the wavelength will change. The number of samples and the capture length are what you feed into the capture component; both will be constant (if you don't change them manually).

The first thing you should clarify is what features of the sound you want to visualize/use: level (volume), frequency content, whatever... The raw input data as it comes from the soundcard is a giant blob of mostly noise. Smoothing it usually means filtering out high frequency components; typically this also results in slower sample rates.
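One crude way to do both at once (cut the high-frequency content and lower the sample rate) is to average blocks of samples; the block size and data here are arbitrary:

```python
def block_average(samples, block=64):
    """Collapse every 'block' consecutive samples into their mean."""
    return [sum(samples[i:i + block]) / len(samples[i:i + block])
            for i in range(0, len(samples), block)]

coarse = block_average(list(range(256)), 64)   # 256 samples -> 4 values
```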

GH isn't meant to be a real-time audio processor. It's single threaded, so GH uses only one of my available 8 cores. Most of GH's data management would be avoided in an audio environment, and moving data from one component to the next takes some time. This is why I suggested doing all the data manipulation within a single script component. Like Andy already said, for one second of mono sound the capture component will return 22k samples (points), which is already half the data rate of CD audio. So basically everything you do with sound in GH is likely to freeze your PC.

On my PC, making a point for each sample in 1 second of captured sound and interpolating a curve through them already takes about 0.6 seconds. No smoothing, no nothing... no chance for real time.

Hello everyone

I am new to Grasshopper and I want to make the curve (shown in the photo above) from sound waves. Can you help me?
