Independent Science Research Quarter 3

So far, I have integrated holograms of the E-lab into my augmented reality human-machine interface, set up multiple sensor nodes for my data monitoring, networking, and automation project, coded scripts to log data as CSV, and programmed a web interface for reading the data.
Quarter 3 summary video: http://energylab.hpa.edu/public/college/davy_ragland/Media/Projects/Davy_Ragland_Q3.mov
Mirror: https://www.youtube.com/watch?v=fIMZsBSqPZE
I added a dynamic object tracking HUD to my augmented reality hologram interface.
http://physics.hpa.edu/users/dragland/weblog/59682/Independent_Science_Research_Week_1.html
I worked with EXG Technologies again.
http://physics.hpa.edu/users/dragland/weblog/99018/_Independent_Science_Research_Week_2.html
I began porting my holographic interfaces to iOS with Xcode.
http://physics.hpa.edu/users/dragland/weblog/49834/Independent_Science_Research_Week_3.html
I created an augmented reality hologram of the Energy Lab for the Google Glass from the drone's 3D mapping.
http://physics.hpa.edu/users/dragland/weblog/000af/Independent_Science_Research_Week_4.html
I began my data monitoring, networking, and automation project.
http://physics.hpa.edu/users/dragland/weblog/31096/Independent_Science_Research_Week_5.html
I soldered an XBee RF module to the op-amp of an energy-monitoring sensor, so that I can stream its data wirelessly.
http://physics.hpa.edu/users/dragland/weblog/543bd/Independent_Science_Research_Week_6.html
I set up the Raspberry Pi and began coding my Python data logging script (a rough sketch of the logging loop appears after this list).
http://physics.hpa.edu/users/dragland/weblog/bf62f/Independent_Science_Research_Week_7.html
I extended my script to write sensor data to a CSV file.
http://physics.hpa.edu/users/dragland/weblog/2a58a/Independent_Science_Research_Week_8.html
I coded the web interface for my data system in HTML, CSS, and JavaScript, so that I can easily view my data.
http://physics.hpa.edu/users/dragland/weblog/33579/Independent_Science_Research_Week_9.html
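Here is the rough sketch of the Pi-side logging loop mentioned above. The port name, baud rate, and line format are placeholders rather than the actual E-Lab setup, and it assumes the XBee shows up as a serial device readable with pyserial.

import csv
import time
import serial  # pip install pyserial

PORT, BAUD = "/dev/ttyUSB0", 9600  # placeholder port and baud rate

with serial.Serial(PORT, BAUD, timeout=5) as xbee, \
        open("energy_log.csv", "a", newline="") as log:
    writer = csv.writer(log)
    while True:
        line = xbee.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue  # read timed out with no data
        writer.writerow([time.time(), line])
        log.flush()  # keep the CSV current for the web interface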


Independent Science Research Week 5

I found out how to export readable CSV data from the Muse headset, so that we can pull data for the sleep apnea prediction device.
http://physics.hpa.edu/users/dragland/weblog/37856/Sleep_Apnea_Prediction_2032015.html
I began soldering the sensor node hardware for my data monitoring, networking, and automation project.
http://physics.hpa.edu/users/dragland/weblog/45049/Data_Monitoring_Networking_and_Automation__242015.html
I finished soldering the receiver node.
http://physics.hpa.edu/users/dragland/weblog/582a0/Data_Monitoring_Networking_and_Automation_252015.html
I finished building and configuring the receiver and transmitter nodes.
http://physics.hpa.edu/users/dragland/weblog/5c922/Data_Monitoring_Networking_and_Automation_262015.html


Sleep Apnea Prediction (2/03/2015)

Today I figured out how to extract CSV data from the Muse headset.
I then downloaded the Muse developer SDK, which came with Muse IO, which streams the data over UDP to a specific port using the Open Sound Control (OSC) protocol; MuseLab, which visualizes the data from each channel and saves it to a .MUSE file; and Muse Player, which converts the .MUSE file to a .CSV.
http://www.choosemuse.com/developer-kit/
I then connected the device over Bluetooth and streamed the OSC data to port 5000.
Next, I opened MuseLab and visualized the data coming in on that port.
Here is what my brainwaves look like.
While MuseLab could save the stream directly as a .CSV file, all that changes is the text encoding; the data itself comes out as unreadable garbage, since it is still in the proprietary format.
I then used Muse Player to convert the .MUSE file to a .CSV.
Now the data is in a much more readable format.
While this does give a readable CSV file, the formatting is unusual, as the Muse has hundreds of data channels, measuring things like battery level and whether the jaw is clenched.
In addition to the hundreds of channels per timestamp, there seem to be multiple values for each channel under a single timestamp. For example, there are 20 "eeg" entries per timestamp, with 4 values for each of those, so it is unclear exactly how the data is organized.
The next challenge is understanding how the data channels are formatted in the CSV.
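As a first step toward decoding it, here is a minimal Python sketch that groups the rows by timestamp and channel path. It assumes (I have not confirmed this) that Muse Player writes one row per message in the form timestamp, channel path, then the values.

import csv
from collections import defaultdict

def group_by_timestamp(path):
    """Return {timestamp: {channel: [value lists]}} from a Muse Player CSV."""
    grouped = defaultdict(lambda: defaultdict(list))
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) < 3:
                continue  # skip header or malformed rows
            timestamp, channel, values = row[0], row[1], row[2:]
            grouped[timestamp][channel].append(values)
    return grouped

data = group_by_timestamp("muse_export.csv")  # placeholder file name
first = next(iter(data))
print(first, len(data[first].get("/muse/eeg", [])))  # e.g. 20 eeg rows of 4 values each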


Independent Science Research Week 3

I continued working with Karen for EXG Technologies.
I worked on porting my holographic augmented reality immersive human machine interface to the iPad with Xcode.
http://physics.hpa.edu/users/dragland/weblog/ac472/Augmented_Interactive_Holograms_IOS_XCODE_1202015.html
I finally got my application to install, but it would not run.
I also attempted to port my Leap Motion Oculus Rift immersive human machine interface to the Mac, but it crashed whenever I tried running my program.
http://physics.hpa.edu/users/dragland/weblog/37163/Augmented_Interactive_Holograms_IOS_XCODE_1222015.html


Augmented Interactive Holograms IOS XCODE (1/20/2015)

For the EXG Technologies sleep apnea prediction project, we decided to continue working with the Muse over the ifocus band, because of its versatile SDK.
I also continued working on exporting my holographic augmented reality interface to the iPad.
I started by taking the Unity assets I had created and importing them into Unity.
From there I built for iOS, using com.ragland.davy.hologram as the bundle identifier.
I then opened the Xcode project and set the code signing identities to Dr. Bill's Apple Developer account.
From there, I set the bundle identifier to com.ragland.davy.hologram and the team to Dr. Bill's account.
I then edited the schemes so that they would run as an executable.
Next, I cleaned the project, and archived it.


Independent Science Research Week 2

I began working for EXG Technologies again, and worked with Karen to determine how to create a consumer sleep apnea prediction device.
http://physics.hpa.edu/users/dragland/weblog/8017d/Sleep_Apnea_Prediction_1132015.html
I worked on porting my augmented reality holograms to iOS.
http://physics.hpa.edu/users/dragland/weblog/edebe/Augmented_Interactive_Holograms_IOS_Build_1152015.html
I presented my research to a group of faculty, teachers, and students.
http://physics.hpa.edu/users/dragland/weblog/27db5/Immersive_Human_Machine_Interfaces_Presentation_1162015.html


Sleep Apnea Prediction (1/13/2015)

Today I worked with Karen Crow to help plan out how a sleep apnea prediction application would work. We worked out how to build from where we are now toward a unified mobile application that would integrate the live data from the electroencephalogram and the CO2 sensor through one connection, graph it accurately, and provide a user interface that can detect patterns in the data.
Here is our collaboration:
Dear Karen,
-Intercepting the EEG data works by pairing the headset with the computer over Bluetooth and then sending the data to the developer application, which provides a live visualizer and saves it as a .muse log, which we will then find a way to convert to CSV.
-The data is .muse, and there should be a way to use the developer tools to convert that to .CSV.
-On the mobile application, we only get "concentration" and "attention" levels.
-The data is stored locally on the hard drive as an individual file per session.
-There is no notification that pops up with the EOG, but it can clearly be seen by looking at the change in the shape of the waves.
-Once the EEG data is converted to CSV, the timestamp should automatically be included, so graphing it alongside the CO2 data should not be difficult. However, getting the CO2 data itself is currently only possible on a computer, as it uses a USB connection, and we cannot forward that over Bluetooth, because devices can only pair with one other device at a time.
-Also, the CO2 data and the EEG data would be stored as two separate files with the same timestamps. We could avoid this by physically engineering the CO2 sensor into the circuitry of the EEG, so that its data stream is embedded in the EEG data, but that would be unnecessarily complex.
-In terms of data encryption, CSV has none, but it is useful because any graphing program can take it as input. If we wanted to encrypt the data, we could create our own proprietary file type, similar to the .muse, but we would then run into trouble analyzing it unless we wrote our own graphing and analysis programs from the ground up.

Ideally, we want a mobile application that connects through bluetooth to the device, captures the data, and provides an interface for viewing the data.
Currently, the CO2 sensor only works on a PC and needs USB, but the data is great and exports as CSV. The Muse can connect to mobile devices, but the mobile applications are geared toward consumers and only show “concentration” and “attention” data. The Mac SDK, however, can stream the live data from the paired device to a developer application that saves to a .muse file, which we will figure out how to convert to CSV; it will probably need another developer application made by Muse. Muse also provides a PC SDK, but I have not tested it yet.

What we will need to do is choose one platform and find a way to stream the data through one connection, as the Bluetooth protocol can only pair with one device at a time. We would either have to build the CO2 data stream into the Muse headset itself, so that all the data comes through one paired connection, or utilize the micro-USB connection on Android phones for the CO2 sensor (the iPhone would need a proprietary cable). Streaming the raw data itself should be possible on any platform once the connections are made, as an internal server would simply stream the live data to a specific port, and the EXG application could build up the analysis interface from the ground up. However, this would lose all the benefits of having applications and developer tools premade for the hardware.

Right now, using a PC to run both the CO2 SDK and the Muse SDK can get data from the USB connection and the Bluetooth stream, which is saved into an individual CSV file on the local hard drive for each sensor.
However, this would require a PC, setting up the stream with developer tools, and manually analyzing the CSV data, which is not ideal for the user.

Thank you,
-Davy Ragland
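Following up on the two-separate-files point in the letter, here is a rough Python sketch of joining the CO2 CSV and the EEG CSV on their shared timestamps. The file names and column layouts are hypothetical; both files are assumed to start with a timestamp column in seconds.

import csv

def load_by_second(path):
    """Map int(timestamp) -> the last row of sensor values seen in that second."""
    rows = {}
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader, None)  # skip a header row if present
        for row in reader:
            try:
                rows[int(float(row[0]))] = row[1:]
            except (ValueError, IndexError):
                continue  # skip malformed rows
    return rows

co2 = load_by_second("co2_log.csv")  # placeholder file names
eeg = load_by_second("eeg_log.csv")

with open("merged.csv", "w", newline="") as out:
    writer = csv.writer(out)
    # keep only the seconds where both sensors reported data
    for t in sorted(co2.keys() & eeg.keys()):
        writer.writerow([t] + co2[t] + eeg[t])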


Independent Science Research Winter Break

During the winter break I finished the Leap Motion and Oculus Rift immersive human-machine interface I had been building, so that I can see in 3D, work with my hands in 3D, overlay the virtual environment onto reality, deal with data as both a value and a physical object, and open files, all through intuitive hand gestures inside the 3D workspace. I also worked with Google Glass, and was able to create an augmented reality control system where I can simply look at a physical marker in order to send XML commands to the web relay, pulling data from sensors and even turning on lights. I also got 3D rendering on the Google Glass working, so that I could attach my own 3D holograms to the physical markers. I was able to make these interactive by using the rudimentary image tracking to follow my hand gestures, in addition to creating virtual buttons that react to my hand motions.
I implemented the raw image data from the Leap Motion into my interface as an augmented reality input mechanism.
http://physics.hpa.edu/users/dragland/weblog/037b2/Leap_Motion__Augmented_Reality_12072014.html
I worked on redesigning the HPA website.
http://physics.hpa.edu/users/dragland/weblog/9f329/Website_build_and_complete_redesign_1282014.html
I updated my 3D virtual reality website and aligned the raw image data with the virtual hands.
http://physics.hpa.edu/users/dragland/weblog/3ffe5/Leap_Motion__Augmented_Reality_12092014.html
I analyzed the Apache logs on the web server and was able to track who saw my college website, and when (a small log-parsing sketch appears after this list).
http://physics.hpa.edu/users/dragland/weblog/685bd/Apache_Log_Analysis_12102014.html
I 3D printed a head mount of the Leap Motion, so that it would fit on the Oculus Rift.
http://physics.hpa.edu/users/dragland/weblog/b09fe/Leap_Motion__Oculus_Rift_3D_Printed__Mount_12112014.html
The website redesign team prepared for a presentation of our progress.
http://physics.hpa.edu/users/dragland/weblog/dfb67/Website_build_and_complete_redesign_12122014.html
We continued preparing our presentation.
http://physics.hpa.edu/users/dragland/weblog/52ff3/Website_build_and_complete_redesign_12142014.html
We gave our presentation to the important adults in charge of the situation.
http://physics.hpa.edu/users/dragland/weblog/b1faf/Website_build_and_complete_redesign_12152014.html
I began learning the basics of developing for Android, as that is the OS Google Glass uses.
http://physics.hpa.edu/users/dragland/weblog/7d2d8/Android_Development_12162014.html
I continued making basic applications.
http://physics.hpa.edu/users/dragland/weblog/149b3/Android_Development_12172014.html
I learned how to install full APKs onto the Google Glass by installing modified Samsung drivers, so that I can get around Google's standard API limitations.
http://physics.hpa.edu/users/dragland/weblog/9d974/Google_Glass_APK_Installation_12182014.html
I used QR codes with the Google Glass to serve as an augmented reality marker system: XML commands are encoded as URLs inside the markers, so that by looking at a QR code with the Google Glass, I can turn on the lights.
http://physics.hpa.edu/users/dragland/weblog/f1a0a/Google_Glass_Augmented_Marker_Control_System_12192014.html
The Leap Motion now works with my 3D website.
http://physics.hpa.edu/users/dragland/weblog/ab07f/Virtual_Reality_Website__Leap_Motion_12202014.html
I expanded the Google Glass augmented reality control system by creating more physical markers that parse XML commands as URLs.
http://physics.hpa.edu/users/dragland/weblog/ab07f/Virtual_Reality_Website__Leap_Motion_12202014.html
I successfully got my own 3D environments to render on the Google Glass.
http://physics.hpa.edu/users/dragland/weblog/cb40c/Google_Glass_Unity_APK_Installation_12232014.html
I made my Google Glass 3D environments interactive, so that they can take in live data and adjust the display accordingly.
http://physics.hpa.edu/users/dragland/weblog/aca0e/Google_Glass__Unity_3D_Workspace_12242014.html
I got the voice controlled menu of the Google Glass to work with my augmented reality program.
http://physics.hpa.edu/users/dragland/weblog/f55bd/Google_Glass_Augmented_3D_Rendering_12282014.html
I got the touchpad of the Google Glass to work as input, along with the accelerometer.
http://physics.hpa.edu/users/dragland/weblog/0f6b3/Google_Glass_Augmented_Interactive_3D_Rendering_12292014.html
I began working with the Vuforia SDK, in order to utilize image tracking and recognition as a means of interaction with my Google Glass augmented reality programs.
http://physics.hpa.edu/users/dragland/weblog/3ef50/Google_Glass_Augmented_Interactive_3D_Control_12302014.html
I tested out the sleep apnea device we made earlier, and I was able to create an augmented reality hologram of myself that is connected to a physical image marker through analysis of the angular displacement of the camera data.
http://physics.hpa.edu/users/dragland/weblog/b7d47/Google_Glass_Augmented_Interactive_3D_Holograms__Sleep_Apnea_Electroencephalography_data_analysis12312014.html
I got the textures of the hologram to look more like a sci-fi hologram, and I 3D printed a projector to house the image target, so that it looks like the hologram is coming out of it.
http://physics.hpa.edu/users/dragland/weblog/17b71/Google_Glass_Augmented_Interactive_3D_Holograms_112014.html
I was able to build my own rudimentary hand-gesture tracker by utilizing the environmental image tracking to get the holograms to follow my hands around, making them interactive in an intuitive way.
http://physics.hpa.edu/users/dragland/weblog/1583a/Google_Glass_Augmented_HandMotion_Interactive_3D_Holograms__Website_build_and_complete_redesign_122014.html
I made my holograms transparent, so that they look even more sci-fi, and I began expanding the hand-gesture interactivity by programming virtual buttons.
http://physics.hpa.edu/users/dragland/weblog/0718f/Google_Glass_Augmented_Interactive_3D_Holograms_132014.html
I added more holograms to my augmented reality Google Glass interface.
http://physics.hpa.edu/users/dragland/weblog/2528e/Google_Glass_Augmented_Interactive_3D_Holograms_142014.html
I got the virtual buttons to execute code when a region of the image is obscured, meaning that I can load a hologram and swipe my hand across it to trigger an event. The event launches the URL-opening program I wrote and sends XML commands to the web control relay as addresses, so I can look at a light, wave my hand across it, and turn it on.
http://physics.hpa.edu/users/dragland/weblog/1d722/Google_Glass_Augmented_Interactive_Virtual_Button_3D_Holograms_152014.html
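Here is the small Apache log-parsing sketch mentioned above. It assumes the default "combined" log format; the log path and the page of interest are placeholders.

import re
from collections import Counter

# default Apache "combined" log format
LOG_LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) \S+ "(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

visitors = Counter()
with open("access.log") as log:  # placeholder log path
    for line in log:
        m = LOG_LINE.match(line)
        if m and "/users/dragland/" in m.group("request"):
            visitors[m.group("ip")] += 1

for ip, hits in visitors.most_common(10):
    print(ip, hits)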


Google Glass Augmented Interactive 3D Holograms + Sleep Apnea Electroencephalography data analysis (12/31/2014)

Today I tried to see if the BrainBand XL headset we used for the sleep apnea prediction device can accurately tell when a person's eyes are closed, using the CSV data exported by Mindstream.
I started by researching online and found this academic research paper, http://www.ncbi.nlm.nih.gov/pubmed/17911042, which says that one should look for changes in the alpha brainwaves when detecting eye closing and opening.
I then set up the sensor we made earlier, connected the EEG, streamed the data, and wrote it to a CSV file.
I had my eyes open until the 6 minute mark, and closed them until the 7 minute mark.
I then graphed the appropriate values:
It seems that there is no correlation between the alpha levels and the time.
This may be because of the slow refresh rate (1 data point per 3 seconds), because the link is wireless and there may be atmospheric interference, or because the CSV streaming program streams all of the raw data and does not try to remove or correct hardware interference the way a commercial product would.
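To make the eyes-open vs. eyes-closed comparison concrete, here is a sketch of the alpha-band check. The "time" and "alpha" column names are assumptions; the real Mindstream export may label them differently.

import csv

def mean_alpha(path, t_start, t_end):
    """Average the alpha values whose timestamps fall in [t_start, t_end) seconds."""
    values = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                t, alpha = float(row["time"]), float(row["alpha"])
            except (KeyError, ValueError):
                continue  # skip rows without usable numbers
            if t_start <= t < t_end:
                values.append(alpha)
    return sum(values) / len(values) if values else float("nan")

# eyes open until the 6-minute mark, closed from 6 to 7 minutes
print("eyes open:  ", mean_alpha("eeg_log.csv", 0, 360))
print("eyes closed:", mean_alpha("eeg_log.csv", 360, 420))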

For my augmented reality hologram program, I changed the configuration of the camera object, so that there would be a layer of reality to work with.
I then imported the image tracker object into my scene.
Next, I attached the correct graphics to the augmented reality image marker.
I then tried importing the 3D model of myself, but it was impossible to set up, as the model was not placed at the origin.
Thus, I had to open it up in Netfabb and move the mesh to the center of the .OBJ.
Here is the hologram of myself placed on top of the image marker.
I then exported the build for android, and it worked perfectly!
However, it was hard to see, so I had to go in and edit the materials.
I gave them all a transparent blue glow, so that it would look like the holograms from Star Wars or Iron Man.
Here is the new hologram.
I exported a build for Android, and again, it worked perfectly!
I then installed it onto the Google Glass through the Android Debug Bridge and ran it.
However, it was significantly more difficult to align the marker with the camera, the build would not install permanently, and the device would overheat.


Google Glass Augmented Interactive 3D Control (12/30/2014)

Today I started by attempting to port my immersive human-machine Leap Motion and Oculus Rift interface to the web, but it does not work. I suspect this has to do with web browsers not being able to use that much memory for rendering with non-standard hardware.
I then expanded my augmented reality control system by creating more physical markers around the E-Lab that parse XML commands as unique addresses for the web control relay.
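For reference, here is a hedged Python sketch of how a recognized marker could map to a web-relay command. The relay address and query format below are hypothetical placeholders, not the actual E-Lab relay URLs.

from urllib.request import urlopen
from urllib.error import URLError

# hypothetical marker-to-command table; these are not the real relay URLs
MARKER_COMMANDS = {
    "lamp_marker": "http://relay.example.local/state.xml?relay1State=1",
    "fan_marker": "http://relay.example.local/state.xml?relay2State=1",
}

def trigger(marker_id):
    """Fetch the relay URL associated with a recognized marker."""
    url = MARKER_COMMANDS.get(marker_id)
    if url is None:
        return False
    try:
        with urlopen(url, timeout=2) as response:
            return response.status == 200
    except URLError:
        return False

print(trigger("lamp_marker"))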
I also began looking into new 3D printers and electroencephalograms for the E-lab, for projects like engineering parts, and sleep apnea prediction.
In addition, I am going over the Arduino components we have, to see what would be fun for the Advanced Computer Technology class.
Here is a good setup guide: http://codeduino.com/tutorials/getting-started-with-arduino/#OSXguide.
Using the Arduino is really easy; the only hurdle is choosing the right port when uploading a program.
I think that reading through this PDF:
https://dlnmh9ip6v2uc.cloudfront.net/datasheets/Kits/SFE03-0012-SIK.Guide-300dpi-01.pdf
would be a good way to ease into the world of electronic engineering, and we could go through each tutorial at our own pace while having the ability to help each other in class. Each tutorial in the PDF comes with the code, so it is a simple matter of following the wiring directions. Nevertheless, the tutorials still teach core concepts about essential pieces of electrical equipment and provide real-world examples to help show how things work.
I have also begun to work with the Vuforia Augmented Reality software development kit.
I then rewrote the manifest code, strings.xml, and my_voice_trigger.xml, so that the APK would export with the correct permissions and voice controls for the Google Glass.
Next, I imported the augmented reality camera prefab into the workspace:
I then had to create a physical marker:
Here is the marker:
I then loaded the APK onto my Android device, but ran into issues when executing the application: the camera module would not get any data frames, so all that was rendered was a black screen.
I fixed this by deleting all the plugins and reinstalling.
Now, my camera view program works.
Cameraception:


Board of Trustees (11/14/2014)

Today I had the wonderful opportunity to present my work to the HPA Board of Trustees, in order to show them what exactly goes on in the Independent Science Research class. I got to talk about my brain research, the sleep apnea research, my 3D printed cast, and my human-machine interfaces that utilize the Leap Motion and the Oculus Rift.


Independent Science Research Week 13

I adapted my Leap Motion file management program to include visual representations of data as 3D objects that exist in space and are part of a system that can be manipulated through the intuitive laws of physics.
http://physics.hpa.edu/users/dragland/weblog/4c9a2/3D_Virtual_Reality_Human_Machine_Interface__leap_11102014.html
I successfully integrated the Oculus Rift into my program, so that the 3D workspace actually looks 3D, is immersive, utilizes the Leap Motion, and can be looked around in.
http://physics.hpa.edu/users/dragland/weblog/ee83f/Oculus_Rift__Leap_3D_Virtual_Reality_Human_Machine_Interface_11112014.html
I mapped transformation input to the positioning of the Oculus Rift, so that not only can one look around in 3D, one can actually walk around an entire digital world that feels like reality. I also programmed the data objects I made earlier to change their values when touched. I can now see systems and look around in 3D, walk around the physical environment in 3D, import my hands into the 3D workspace, physically manipulate data as objects, change the values of the data through these interactions, and even map specific hand gestures to functions, such as opening a file.
http://physics.hpa.edu/users/dragland/weblog/d3bc9/Oculus_Rift__Leap_3D_Virtual_Reality_Human_Machine_Interface_11122014.html
I, along with Caylin, finished a video for the PGC finals.
http://physics.hpa.edu/users/dragland/weblog/e18ea/Project_Green_Challenge_Finalist_Video_11132014.html
I also got to present my work to the board of trustees.
http://physics.hpa.edu/users/dragland/weblog/c1f06/board_of_trustees_11142014.html


EXG Data from Electroencephalogram (8/4/2014)

Because our homemade EEG was not performing as well as we had hoped, we turned back to attempting to get the raw data from the BrainBand. After quite a bit of searching, I found this program, written by Eric Blue.
http://eric-blue.com/2011/07/24/mindstream-neurosky-eeg-data-streamer/
This program uses Java to stream the live data out of the ThinkGear socket and into other applications.
Here is the .bat file.
Here it is updating live.
Once I turned on the brainband, this is the output I received.
As you can see, there are concrete numbers, rather than fancy visuals, allowing me to perform detailed analysis on the data.
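As a rough alternative to the Java streamer, the same data should be readable straight from the ThinkGear Connector socket in Python. The port number (13854) and the JSON configuration message are from my memory of the ThinkGear socket protocol docs, so verify them before relying on this.

import json
import socket

HOST, PORT = "127.0.0.1", 13854  # assumed ThinkGear Connector defaults

with socket.create_connection((HOST, PORT)) as sock:
    # ask the connector for JSON-formatted output
    sock.sendall(json.dumps({"enableRawOutput": False, "format": "Json"}).encode())
    buffer = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break  # connector closed the connection
        buffer += chunk
        while b"\r" in buffer:
            line, buffer = buffer.split(b"\r", 1)
            try:
                packet = json.loads(line)
            except ValueError:
                continue  # ignore partial or non-JSON lines
            if "eSense" in packet:
                print(packet["eSense"])  # attention / meditation values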


EXG Sensor Apparatus (8/11/2014)

Today I continued to try to get the EEG data on the new PC.
Everything worked fine except the Mindstream system tray application: the .bat file would immediately close after I attempted to launch it.
I then edited the script and put a pause at the end of the program, allowing me to see the error message in the terminal.
Next, I saw that it would not take "java" as a recognized command, despite the fact that I had installed the latest version.
I then went into the Java settings in the Control Panel and found that the Java runtime path was different from the one in the script.
Even after I changed it, it still would not run.
I decided to do a factory reset of the PC overnight and carefully install everything one piece at a time, just in case Java having originally been installed somewhere else was the issue.


EXG Sensor Apparatus (8/12/2014)

I attempted to fix the Java problem by reinstalling it in different directories, with no success.
I tried enabling some of the advanced debugging features, but that did not help either.
I think it may be a problem with the version of Java installed, or with the autorun.bat file. It may even be that the Mindstream application was built for 64-bit machines, not 32-bit.
Until this is resolved, live data cannot be streamed from the electroencephalogram.


EXG Sensor Apparatus Working (8/13/2014)

I finally fixed the problem.
After reinstalling, doing a factory reset, changing the Java path, and going through the advanced security settings and the debug logs, I found that the issue was 32-bit vs. 64-bit.
Apparently the browser thought that I was using a 32-bit operating system, so it downloaded the 32-bit version of Java, but I was actually using a 64-bit machine.
Once I downloaded the 64-bit version of Java, the Mindstream application worked perfectly.
I suspect that this was an issue of backwards compatibility, meaning that the Mindstream application only works on 64-bit systems.
Nevertheless, I can now collect data from the CO2 sensor and the electroencephalogram, on a machine other than my own.
All that is left is to create a README how-to guide, making it easy for somebody else to use the system, and to create a compact housing for the sensors.


EXG Sensor Apparatus (8/6/2014)

Today our simple PC came in; we will put all the necessary software on it and configure it for our sensors.
This computer would be sent with the sensor, making it a simple plug-and-play system that would work for anybody.
I began by downloading GasLab, along with the .NET Framework, and it ended up working fine on the new PC.
I also started to create a readme.txt on the Desktop, which will help the user choose the right settings.
I then downloaded the ThinkGear Connector, along with Eric Blue's Mindstream and the latest version of Java.
However, I then found that the PC is not Bluetooth enabled, so I had to find a USB Bluetooth receiver, which will need to be sent along as well.
The PC then detected the EEG, but I could not get Mindstream to run, as the .bat file would immediately close after I tried running it.
I will look into this problem.


EXG Testing Our Own Electroencephalogram (7/31/2014)

Now that our electroencephalogram was working, we hooked it up to our digital oscilloscope and tried it out.
It didn't look right.
We then tried each individual operational amplifier, but the data did not match our expectations.
The voltages did not seem to be amplified, and the frequencies did not seem to be filtered.
This may be attributed to the broken operational amplifier.


EXG Making Our Own Electroencephalogram (7/30/2014)

Today our resistors and capacitors finally arrived.
We then found this connection diagram for our LMC6484 quad operational amplifier, which helped us wire it correctly.
I also had to recalculate new capacitor values in order to avoid connecting two larger ones in series.
We then built a single band-pass filter with a gain of 10.
Next, we repeated the circuit with the other three operational amplifiers, funneling the output of one into the input of another.
We then connected it to the power rails, and used two identical resistors to bring the voltage down from 5 volts to 2.5 volts.
Here is a schematic we found for the Analog discovery oscilloscope.
We then hooked it up to the computer, but accidentally burned out the operational amplifier because it was put in backwards.
Here is our complete sensor suite, with the two electrodes and CO2 tube on the right.
All we have to do now is troubleshoot it and make it ready for deployment.


EXG Sensor extension (7/29/2014)


Today our electroencephalogram connectors arrived; they allow us to simply clip on the disposable electrodes and connect the ends to the circuit we will build once the electrodes, capacitors, and resistors arrive.
We also fixed the pump battery problem by using a multimeter to find that there was a bad connection through the black wire.
I then took that wire off and soldered a new wire on.
We then plugged everything in and it worked perfectly. All we have left is to build the electroencephalogram.


EXG Sensor Apparatus Housing (7/23/2014)

We tried reprinting the housing case, and while it did not peel, the hole at the top came out in the wrong orientation, so we could not use it.
Rather than use even more ABS plastic, I decided to build a housing out of LEGO.
I created two slots for the battery packs to stand up on the left,
a slot for the Analog Discovery oscilloscope in the middle,
a slot for the electroencephalogram breadboard on the right,
a stand for the K-33 ICB CO2 sensor on the bottom left,
and a slot for the CO2 pump on the bottom right.
I also built it so that the CO2 pipe and the two electroencephalogram electrodes (forehead + reference) come out of the left side, as they all go to the patient.
In addition, the two USB cords, from the Analog Discovery oscilloscope and the K-33 ICB CO2 sensor, come out of the top, as they both go to a computer for analysis.
Finally, I built it so that the piping easily allows the user to breathe air into the sensor while the pump pulls it through, until it is exhausted out of the pump.
Now that everything we need is put together in a compact housing, all we need to do is build the EEG, and make sure everything works.


EXG Making Our Own Electroencephalogram (7/21/2014)

Because we had problems connecting the EEG to the computer, and we needed special software to get data from it in a usable format, we decided to make our own simple analog electroencephalogram.
We started by plotting out what an EEG does. First, the brain wave creates a voltage difference, which we need to amplify. We also need to block out the frequencies we don't need. We then researched and found that the total human brain frequency band is 0-30 Hz.
Duncan then taught us how we would engineer such a device.
We are going to use operational amplifiers to build second-order band-pass filters with the cutoff frequencies we need.
We did more research and found that the cutoff frequency equation is f = 1/(2πRC).
We then set target values of f at 1 Hz and 30 Hz and tried to figure out what capacitors and resistors we would need. The ratio of the capacitors would be 300 and the ratio of the resistors would be 0.1, because we needed an amplification of 10 per band-pass filter.
This made it hard to find working values, as the capacitors would be incredibly small (10E-7 mF) or the resistors would be really small, and they would also be awkward values that would necessitate complex parallel and series combinations. I then decided to use round numbers for the resistors and capacitors, which makes the cutoff frequencies slightly odd numbers, but that doesn't matter.
The final numbers we would use are:
C1 = 0.015 mF (15 µF)
C2 = 5E-5 mF (50 nF)
R1 = 10 kΩ
R2 = 100 kΩ
F1 = 1.06 Hz
F2 = 31.83 Hz
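As a quick sanity check of those numbers, here is the cutoff calculation from f = 1/(2πRC), pairing R1 with C1 and R2 with C2:

from math import pi

def cutoff(r_ohms, c_farads):
    return 1.0 / (2 * pi * r_ohms * c_farads)

f1 = cutoff(10e3, 15e-6)   # R1 = 10 kΩ, C1 = 15 µF  -> ~1.06 Hz
f2 = cutoff(100e3, 50e-9)  # R2 = 100 kΩ, C2 = 50 nF -> ~31.83 Hz
print(round(f1, 2), round(f2, 2))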
Once we knew what resistors and capacitors we would need, we drew a circuit diagram. We ended up using four band-pass filters, which gives us an amplification of 10,000 and a much stricter cutoff slope. This is necessary because the electrical activity of the brain is on the order of microvolts and needs to be greatly amplified in order to be read by the computer. We also only needed one measurement electrode and a ground electrode, making our EEG a lot simpler, but still all we need for our purposes, as we want a composite reading, not a localized one.
The right side of the diagram has three points that connect it to an Analog Discovery USB oscilloscope, giving the device power and allowing a computer to read the voltage differences and export them as data.
Here is the software that we will use, which is free, as opposed to the fancy EEG software.
If this works, then we will have a single-electrode electroencephalogram that provides raw data and has no connection issues.


EXG Electroencephalogram (7/18/2014)

Now that the CO2 sensor was working, we began to integrate the electroencephalogram into the apparatus.
We are using the BrainBand XL, which is far more comfortable, as it is a headband with two metal contacts rather than a delicate headset with fourteen electrodes. We also do not need that many electrodes, as we are taking a composite look at the brain, not a localized one.
I then connected it through Bluetooth, but it would say the device was offline, despite the flashing blue LED.
I then tried on a Mac and had the same experience: it would see the device, connect, and then disconnect in about two seconds.
This meant that I could not use the ThinkGear Connector, despite it working with the MindWave Mobile headset.
I then found that the BrainBand XL uses different software called MyndPlay Pro Research and Analysis Tools 2.3, which has the neat feature of exporting .CSV files directly from the GUI, which works tremendously in our favor.
Also, we reprinted the CO2 sensor housing.
This time the layers stuck together and it came out all in one piece, despite slight bending.
Everything fit, and worked fine with the case.


Final Report (4/16/2014)

Today I organized the results of my project into a final report.
The full text is located here:
Download file "Report2014.pdf"

Abstract:

As individuals attempt to solve problems using different methods and make these decisions within their brains, it is logical that through an analysis of a sample’s brain waves, data about the methods of problem solving used can be established. Through background research, the parts of the brain and their functions have been organized as areas in which to search for distinct data. This includes the prefrontal lobe (planning/ predicting), the parietal center (thinking logically/spatially/mathematically), the parietal sides (thinking about the wording of the problem), the temporal lobe (thinking emotionally, remembering past facts), and the occipital lobe (thinking visually). An Emotiv electroencephalogram was used to collect data, which was then visualized both through a chart and a 3D model of the brain, allowing data about the time, location, intensity, and type of brain wave (alpha, beta, theta, and delta) to be collected. Once the data was analyzed, the following results were concluded: there is a preference for use of the left hemisphere, for planning/predicting in the prefrontal lobe, thinking logically/spatially/mathematically in the center of the parietal lobe, thinking visually in the occipital lobe, for creating alpha and theta waves when solving the problem, and for creating a flash of delta waves once the problem has been solved. In the future this knowledge, with additional research, can help us understand the components of thought, and thus, action.

Conclusion:
In conclusion, through the use of an electroencephalogram, distinguishing factors between different methods of problem solving have been established. A strong majority of the samples showed a preference for using the left hemisphere of the brain, while roughly fifty percent had activity spread from the left to the right. The samples also exhibited large amounts of activity when planning/predicting in the prefrontal lobe, large amounts of activity when thinking logically/mathematically/spatially in the center of the parietal lobe, and a consistent amount of theta activity across all the samples when thinking visually in the occipital lobe. This may signify that these three methods are the main methods taken when an individual attempts to solve a problem. These spikes in activity are also composed primarily of alpha and theta waves, suggesting that these are the main frequencies of waves created when an individual attempts to solve a problem. Finally, one hundred percent of the samples exhibited a burst of delta waves after answering every single question, suggesting that this may even be a universal mechanism. Hopefully, if this experiment were to be repeated with a larger sample size, the results would be consistent, alluding to a deeper underlying pattern in the way individuals solve problems.

However, these results should be taken with the consideration that the brain is a highly variable and complex system and the technology available is limited. Thus, one must not rush to conclusions. For example, the electroencephalograms did not insert detectors into the actual brain tissue, but rather made contact with the skin and measured the electrical activity there. This means that the data itself is inferred by the machine and is open to artifacts: false positives caused by a movement of the sensor across the skin, simulating a change in electrical activity. This phenomenon occurred often, as shown by the erratic lines in the chart in the raw data file. In addition, the lack of depth perception means that there is no way to tell overlapping components apart from each other. For example, the temporal lobe, which deals with memory, covers the limbic system, which deals with emotion. Thus, when measuring brain activity on the temporal sensors, it is impossible to tell where the signal is coming from, as opposed to data from the prefrontal sensors. This renders the data from the attempts to solve problems emotionally, through memory, and through the wording of the problem inconclusive. Further complicating things is the fact that brains adapt and grow, allowing multiple brain parts to assume the roles of related ones, making a clear, defined set of brain activities hard to establish. Finally, due to unforeseen circumstances, the sample size ended up being smaller than anticipated, and thus all data is representative of a limited and non-varied sample group. Therefore, it is strongly recommended that the findings of this report be taken with an adequate understanding of the context in which this data was collected.

Here are the raw data files.

Download file "Key01.mp4"
Download file "Data.xlsx"


Formal Report (3/31/2014)

Today I created a formal report in which I outlined the progress and future of my experiment, and I refined the procedure of my research based on my newly learned background knowledge.
In my formal report I included a title page, abstract, table of contents, proposal, question, variables, hypothesis, background research, materials list, experimental procedure, data, data analysis, and conclusion. Here is the full document:
Download file "Report2014.pdf"


Testing (4/3/2014)

Today I attempted to collect raw data in order to establish a baseline.
I had my subject put on the electroencephalogram and go through the baseline test shown in the last post.
I then recorded her brainwaves as she thought about the prompted ideas, allowing me to infer connections from both the brainwaves and the component brain sections to the different methods of thought.
Here is the entire screen captured video:
Download file "Key01.mp4"


The human brain in love (5/2/2014)

After reviewing my study on the way humans attempt to solve problems, I accidentally came across something fascinating.
While I was looking at the data for emotional questions, I noticed an unusual response.
The prompt I gave the subject was to "think about someone you love".
In response, the subject's entire brain lit up brightly, the strongest reaction the subject produced during the entire experiment.
Here is an animated GIF of the beautiful moment.
At first, the left parietal lobe side and left frontal lobe produce delta waves (low frequency) as the subject hears me say the prompt and interprets it.
Then, most of the brain starts to produce beta waves (high frequency).
Almost instantaneously, the entire brain is covered in beta waves and a sudden flash of theta waves (medium frequency).
Both the intensity and the range of the activity are fascinating, as it is the act of thinking of someone you love that creates such a strong reaction.
I feel as if there is something poetic about this quantifiable scientific basis for love.


Data analysis (4/8/2014)

Today I analyzed the data from the session mentioned earlier. I took the baseline data and looked at the regions specific to the functions, searching for defining brainwave types.
Having the 3D visualizer and the wave chart side by side was immensely helpful as I could filter out artifacts, while easily finding activity in regions I am looking for.
Here is a screenshot in which there is a huge spike in both beta and theta waves in response to an emotional question:
At second zero, delta waves occur at the left parietal lobe side and the left frontal cortex as the question is heard and interpreted.
At second one, the entire brain lights up in beta waves.
At second two, the entire brain flashes up in theta waves.
Interestingly, this question received the strongest and quickest response from the brain of all.

After looking at the data, I was able to construct an average set of criteria for identifying a methodology of thinking:
I also found that after the subject answered every single question, there was a pulse of delta waves.
In addition, it was interesting to watch the amplitude change, as we do not have an explanation for that yet.
Finally, it occurred to me that because the USB receivers are interchangeable between the EEGs, multiple computers could theoretically save data from a single EEG; for example, a Mac could record the brainwaves while a PC records the 3D model. However, this raises the question of which headset is being recorded when multiple EEGs are broadcasting. I must investigate further.
Anyway, here is the data analysis spreadsheet:
Download file "Data.xlsx"
and the raw data:
Download file "Key01.mp4"


Looking for patterns (3/07/2014)

Today I put on the EEG and watched my brainwaves the entire period.
I tried to look for individual parts lighting up as I did specific things like moving my hands, doing math, and visualizing things.
I have come to the conclusion that it is incredibly difficult to isolate artifacts from actual data.
It is even harder to find individual functions.
However, having the brainwave chart alongside it helps a lot.
Here is a screenshot of my brain listening to everyone talking.


Baseline (4/2/2014)

Today I created a baseline test to give to multiple people in order to create an average data set to look for when determining what method of problem solving is being utilized by an individual.
I have created a test in which people think about situations that each isolate a single method: planning/predicting, thinking logically/mathematically/spatially, thinking linguistically, thinking emotionally, thinking through memory, or thinking through visualization.
Then I will create an average baseline from the data of multiple trials and people, establishing a general brainwave frequency and area of activity to look for when someone is planning/predicting, thinking logically/mathematically/spatially, and so on.
This baseline data set will be immensely helpful in determining what method of problem solving is used, solely from raw data.
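Here is a hypothetical sketch of how that averaging could work in Python: average each brainwave band across several trial CSVs. The column names and one-file-per-trial layout are assumptions, not the actual Emotiv export format.

import csv
from collections import defaultdict
from glob import glob

BANDS = ("delta", "theta", "alpha", "beta")

def band_means(path):
    """Mean value of each band column in one trial's CSV."""
    sums, counts = defaultdict(float), defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for band in BANDS:
                try:
                    sums[band] += float(row[band])
                    counts[band] += 1
                except (KeyError, ValueError):
                    pass
    return {band: sums[band] / counts[band] for band in sums if counts[band]}

trials = [band_means(p) for p in glob("baseline_trials/*.csv")]  # placeholder folder
baseline = {}
for band in BANDS:
    values = [t[band] for t in trials if band in t]
    if values:
        baseline[band] = sum(values) / len(values)
print(baseline)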
Here is the baseline test:
Download file "Key.pptx"


Mapping parts & functions (3/06/2014)

After compiling all the background research, I tried to match the parts and functions of the mind to the 3D visualizer model that I will be using to measure brain waves.
Here are the maps I created:
They are not that pretty.

From this map I created, I can infer what is happening when I see waves light up in a certain part of the brain.
However, many parts have multiple functions, making it hard to exactly determine what is going on.
Also, I cannot directly measure the inner parts like the brain stem and the limbic system, which also perform essential functions.
Despite these drawbacks, I can still measure the cerebral cortex, which is the most essential part of the brain, as it is the conscious part.

In relation to my experiment, I can look at the following areas to see how individual people attempt to solve problems.
Prefrontal lobe (processing, judging, predicting)

Frontal lobe (voluntary muscle movement)

Temporal lobe (hearing, emotion, memory)

Parietal lobe sides (Language comprehension (left))

Parietal lobe center (math, spatial reasoning)

Occipital lobe (vision)

Thus, the main problem solving strategies I would look into are:
visualizing the situation,
thinking logically/spatially/mathematically,
thinking about the wording of the problem,
planning/predicting,
thinking emotionally,
and remembering past facts.

I can also use the colors to determine the type of wave.
Delta waves are orange (1-3 Hz)
Theta waves are pink (4-8 Hz)
Alpha waves are yellow (9-14 Hz)
Beta waves are Blue (15-30 Hz)
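Here is a tiny helper that mirrors this key, mapping a frequency in Hz to its band name:

def band(freq_hz):
    """Map a frequency in Hz to the brainwave band from the key above."""
    if 1 <= freq_hz <= 3:
        return "delta"
    if 4 <= freq_hz <= 8:
        return "theta"
    if 9 <= freq_hz <= 14:
        return "alpha"
    if 15 <= freq_hz <= 30:
        return "beta"
    return "out of range"

print(band(10))  # alpha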

My next step would be to wear the EEG while solving problems and see if these are the parts that light up the most, and if I am able to accurately understand what the brain is doing.


Presentation and Learning (2/25/2014)

Today I posed for another shot in the video.
Also, Jessica went over the basic interface of her Mac-based brainwave-reading program, which analyzes waves much better than my PC version (however, my PC can create the 3D model of the brain).


Presentation (2/21/2014)

Today we helped Mrs. Police create a video for her admissions presentation.
She filmed us working with the brain stuff, and Jessica will provide a voice-over.
Here is the screen captured video:
Download file "Test0003.mp4"
(only 4 minutes this time!)
We set it up so that the chart is alongside the 3D model.
Here is a zoomed in view.
Then we recorded Jessica closing her eyes.
Here is a cool still from the video.
There are a lot of artifacts towards the end.
This video, once the Windows interface elements are cropped out, could be used for the presentation.


Recording (2/20/2014)

Now that we can record videos, we decided to record an entire session.
It took us 7 minutes to get good connections as one of the USB ports works only part of the time.
Jessica then attempted to play the addictive game "Flappy Bird".
She showed a spike in theta and beta waves whenever she died in the game.
Most of her activity was concentrated in her frontal lobe (predicting where the obstacles are) and her visual cortex (looking at the game). She died again.
We then zoomed in to get a better view of what is happening.
Here is a spike of delta waves once she started listening to music.
Here is the full 18 minute video:
Download file "Test0002.mp4"


Saving (2/18/2014)

Today I tried to find a way to record not just snapshots, but videos of the brain.
QuickTime would not work for me, so I tried using VLC instead.
After working through the confusing interface, I finally was able to record the screen.
However, it took another 30 minutes to figure out how to save the files, because you have to select the option of saving them manually.
In the end, I was able to record a video of me turning the brain around inside the 3D visualizer.
There is no headset attached, so it is gray and colorless.
Here is the attached video.
Download file "Test0001.mp4"
While this is an improvement over the screen captures that only capture single moments, since now I can sit back and record an entire session, it still only indirectly shows what I see. This means that the 3D file itself is lost, and I cannot go back and move the head around when looking at records, as the angle shown is the only angle recorded. Nevertheless, having it record the screen is a great convenience.


Labeling (2/14/2014)

Today we finished pairing the headsets with their respective USB receivers.
We found that there are 2 working EEGs with interchangeable USB receivers, and 3 working EPOCs with interchangeable receivers.
Also, we found that the EEG filters stuff out, while the EPOC does not.
This procedure took a long time because we had to reapply saline to the sensors between each test.
Also, for some odd reason, the program would say that there are no good connections on the headset sensor diagram, but would register data perfectly when in chart mode.


New Headset (2/11/2014)

Today we opened the brand new headset.
Here are some pictures of us being happy.
However, the image is flipped, as Photo Booth treats images like a mirror.
Also, Justin is being Justin.
Here is a picture of it being charged.
Also, the sensors are perfectly clean, not broken, and beautiful.
We then got a label maker and started organizing the headsets, as well as the USB receivers.
While looking for the USB receivers, I came across a ton of lone sensors and put them back together.
Next class we hope to finish labeling the USB receivers and headsets, so that it is easy to pick a headset and find the corresponding receiver.


More playing with software (2/07/2014)

Today I attempted to learn how to understand the data.

Alina put the EEG on and I got 13/14 connections.
I then decided to create a key to help identify brain waves by their frequencies.
Delta waves are orange (1-3 Hz)
Theta waves are pink (4-8 Hz)
Alpha waves are yellow (9-14 Hz)
Beta waves are Blue (15-30 Hz)
Then, we went back to experimenting with different actions.
Most of her brain activity is on the left half.
Here she is counting with her eyes closed.
Here she is counting in another language with her eyes closed.
Here are some shots of her thinking.
Here is a video showing how the software works in real time.
The localized activity is beautifully captured, as we can see which part is active.
We can also see to what extent and in what frequency each part is working.
Here Alina attempts to write with her left hand.
Jessica was then put under the microscope, and we watched her listen to music.
We got perfect connections with her after we put the sensors in the salt water again.
This is what Jessica listened to while her brain was recorded: Red Hot Chili Peppers- "Scar Tissue"
Her brain almost pulsed to the beat.
There was a lot more activity when the vocals kicked in than when it was just instrumental.
Here is a side shot of her brain.
This software is beautiful and a great asset to viewing the activity of the brain.
However, it has a few flaws:
1) The brain will not stop turning, making it impossible to view one single part of the brain for a long time.
2) There is no way to save data, which is why I am constantly taking screenshots. I could take a video using QuickTime, but it would still be preferable to save the 3D data file, so that I could go in later, turn the model around, and re-examine specific areas through the 3D interface.
Despite these two issues, the software itself runs perfectly, and renders a breathtaking visual that is much more intense than a graph.


New 3D Brain software! (2/05/2014)

Today I downloaded the new 3D Brain Visualizer software and played around with it for a couple hours.
However, before I could begin, I found that the EEG was not charged.
While the EEG was charging, we found that it is much easier to submerge the non-electrical inserts into the saline solution.
Once it charged, we spent a couple minutes looking for the USB dongle, and finally found one that works, despite being labeled for the other headset.
I then loaded up the 3D brain activity map, and saw that most connections were good.
Here are my brain waves as seen through the standard chart interface.
I then opened up the new Brain Visualizer and looked at my brainwaves as they occurred inside my head.
I think the Visualizer has about a one second delay.
I am now able to turn the model around and see what areas of the brain have more activity than others, along with colors signifying the specific type of wave based on the frequency.
Also, the model shows the EEG, along with colors signifying the status of the connection.
From this top view, the component brain structures can be seen.
However, activity is only recorded in the upper surface, as we use external sensors that cannot reach the deeper parts of the brain.
I then found that I can zoom into my own head using the touchscreen on my laptop, an amazing visual notion, as I can "drive around" my own mind, zooming in on specific parts of my brain as they experience activity.
Hannah and Luigi then tried to stimulate activity by annoying me with the Nyan Cat song, which worked initially until I tuned it out, causing the brain activity to return to normal levels.
Hannah then played Beethoven's 9th Symphony (arguably one of the greatest pieces of music ever created, except for Mozart's Requiem), and my brain waves pulsed roughly in tune with the beat.
The activity greatly increased from when I was just listening to when I visualized the music.
Here I got the connections to be green again and am continuing to dive into my head, watching specific sections light up as I turn and zoom through the model.
Here is a picture of me wearing the EEG headset along with my brain being recorded and modeled in real time.
Even over a couple of seconds, the brain waves change as I watch them happen.
This technology greatly increases the amount of data that can be analyzed instantly by introducing an intuitive visual means of seeing location along with frequency and time, adding more dimensions to the chart interface. In relation to my project specifically, this software allows me to tune out unnecessary frequencies and simply see which part of the brain lights up as people attempt to solve puzzles.


It works now (1/25/2014)

Never mind: I got the 3D brain map to work by simply updating my OS from Windows 8 to Windows 8.1.
I have no idea how this worked and I give up trying to understand why.
Hopefully it will remain functioning.



Got the EEG to connect to my computer (1/17/2014)

I connected the EEG to my computer for the first time, and despite having lots of hair, I was able to get a good connection.


(All were green besides P8!)

I then put it on and tried to make the graphs change.
At first, the changes were chaotic, but I eventually found that this was because I was moving my face muscles too much, thus moving the sensors.
After playing around with the settings and the brainwave chart, I was able to see changes in amplitude and frequency when I sat still and blinked.


(The two peaks that echo from all the brain waves)




Software issues (1/21-23/2014)

Despite the 3D brain map having worked earlier, it is now not working.


Above is the error message I am getting.

I tried re-installing the developer edition EDK, but it did not help.
After looking at the help forums, I saw that the common cause of this issue is not having the Research edition installed.
I then re-installed the Research edition, after trying the numerous serial codes.
Finally, it accepted the code and installed.
I then opened all the programs and restarted my computer, but it didn't work, and I am still getting the same error message.
I must find a new solution.

0 comments