A one hour test with speech recognition in Grasshopper. It's the first time I've ever used the Windows built-in recognition engine, or indeed invoked it through code. I'd say the results were definitely mixed.
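(For anyone curious what invoking the engine through code can look like: the sketch below is not the code from this test, just a minimal C# example of driving the Windows engine via System.Speech, constrained to a few placeholder words rather than a real component alias list.)

    using System;
    using System.Speech.Recognition;   // requires a reference to the System.Speech assembly

    class SpeechTest
    {
        static void Main()
        {
            // In-process recognizer listening on the default microphone.
            using (var recognizer = new SpeechRecognitionEngine())
            {
                // Constrain recognition to a short vocabulary (placeholder words, not a real alias list).
                var words = new Choices("circle", "line", "loft", "extrude");
                recognizer.LoadGrammar(new Grammar(new GrammarBuilder(words)));
                recognizer.SetInputToDefaultAudioDevice();

                // Print whatever the engine thinks it heard, with its confidence.
                recognizer.SpeechRecognized += (s, e) =>
                    Console.WriteLine("Heard '{0}' ({1:P0})", e.Result.Text, e.Result.Confidence);

                recognizer.RecognizeAsync(RecognizeMode.Multiple);
                Console.WriteLine("Listening... press Enter to stop.");
                Console.ReadLine();
            }
        }
    }

Constraining the grammar to a known word list, rather than free dictation, is what keeps the accuracy tolerable for command-style input.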
Hi David! Would you share the code for the speech recognition, or give some ideas for reconstructing it?
Regards
L
In combination with the Horster Reference components, this could definitely provide some pretty interesting interaction.
I wonder if it could work in 3D space with Rhino. http://www.technologyreview.com/view/428350/the-most-important-new-...
LeapMotion.... Douglas Adams was right. Soon we'll all be frozen in our chairs in fear of triggering another unwanted motion command.
In combination with this: http://leapmotion.com/ it might be a powerful solution.
You'd need a good headset and be alone in a space before it would be even remotely conceivable that it would help. But I can see how it might be an interesting way to deal with an application after a while. One of the worst things a UX designer can force a user to do is move her hand from the mouse to the keyboard and back. You get this with short-cut keys that cannot be grabbed with one hand and also with command entries. Voice commands are certainly one way out of this pit.
I think even if it worked with 100% accuracy, it would not be used. The problem is ergonomics and some mind-connections ;). When you want to do something on your PC, you just sit down, take the mouse in hand and do it. When you need to speak to the computer, it just wastes too much energy... you get really tired after a while. (On the other hand, remember WALL-E?) I don't say it's useless; we can find some situations where it would be a real advantage (handicapped users?)... and now you'll probably say "thank you, Captain Obvious", so it's time to end this post.
Early this morning I asked David if all this was possible, to try to streamline my workflow... I run XP, so I don't have great built-in voice recognition. I downloaded a program called My Voice Controller (http://www.5hyphen.com/) and, by changing the profile, set it up to call up components using this method from David:
If you can have your speech software send the following keyboard presses:
F4 X Enter, where X is the alias or name of a component, then you can do it.
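(As a rough sketch of that trick, assuming the Grasshopper canvas has keyboard focus and using a hypothetical 0.85 confidence threshold: a handler on a SpeechRecognitionEngine like the one sketched earlier could replay a recognized word as those keystrokes with SendKeys.)

    using System.Speech.Recognition;
    using System.Windows.Forms;        // for SendKeys

    static class VoiceToComponent
    {
        // Attach to a recognizer such as the one above:
        //   recognizer.SpeechRecognized += VoiceToComponent.OnComponentNameHeard;
        public static void OnComponentNameHeard(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result.Confidence < 0.85f) return;  // ignore low-confidence matches (threshold is a guess)

            SendKeys.SendWait("{F4}");                // F4, as in the method quoted above
            SendKeys.SendWait(e.Result.Text);         // the component name or alias that was heard
            SendKeys.SendWait("{ENTER}");             // Enter to confirm and place the component
        }
    }

Whether Grasshopper actually receives the synthesized keystrokes depends on window focus and on how the speech software injects input, so treat this as a starting point rather than a finished macro.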
I don't know if it would be intuitive or not. First, it has to actually work with fairly high accuracy. I suppose just saying a word and having a palette of possible component icons pop up would be faster than double-clicking and typing a key phrase.
Would it be intuitive to use speech recognition for component initialization?