Installing Arch Linux in the Morken 212A Lab

This took a while to document since I spread the installation out over three days (Monday, Tuesday, and Thursday specifically). The first few hours were spent trying to figure out why the LiveUSB's Arch Linux installer wouldn't boot; after inspecting the specs for all of the desktops, it turned out to be the computers' fault. The rest of the time was spent waiting for the packages to finish installing. The reasons we installed Arch Linux on these computers were:

  1. Ready access to the software we need (Arduino SDK and several dependencies for the Emotiv EPOC SDK) through the AUR
  2. phora has experience with Arch Linux packaging since she has an Arch Linux desktop
  3. It is easy to install only the packages needed

And the packages we installed on the machines were:

  • base{,-devel} - Required for all Arch Linux machines. base-devel is an assumed prerequisite for using the AUR.
  • kdebase - Easy to set up, ultra-customizable GUI
  • kdegraphics-{gwenview,okular} - Basic needs (image viewing, pdf viewing)
  • gimp - For editing our concept art
  • dia - For editing our concept diagrams
  • android-sdk{,-platform-tools} - In case we need to develop any android applications
  • eclipse-android - In case we need to develop any android applications
  • arduino - To start programming basic functionality in the robot
  • qtcurve - To keep all GUI widget themes consistent (this is important for the accessibility problems mentioned in Emotiv EPOC SDK Experiments). Also allows changing the colors and styling without much technical knowhow.
  • kde-gtk-config - Same rationale as qtcurve: keeps GTK application themes consistent with the rest of the desktop.
  • firefox - Basic needs (browsing)
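As an aside, the `{,...}` notation in the list above is shell brace expansion, which works with pacman because the shell expands the names before pacman sees them. A quick sketch of how it expands (the pacman line is illustrative only and not run here; any AUR packages in the list would be built with makepkg rather than installed via pacman -S):

```shell
# Brace expansion turns the compact package notation into full names:
echo base{,-devel}                   # base base-devel
echo kdegraphics-{gwenview,okular}   # kdegraphics-gwenview kdegraphics-okular
echo android-sdk{,-platform-tools}   # android-sdk android-sdk-platform-tools

# Illustrative install line for the repo packages (requires root, shown only):
# pacman -S base{,-devel} kdebase kdegraphics-{gwenview,okular} gimp dia firefox
```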

In other news, an email came in on 2014-11-07 stating that an unexpected turn of circumstances had made refunds available. I applied for one, especially since we need the tests to happen as early as possible. This means we'll have to look at alternate hardware while waiting for the device to come in and/or develop any demo code for the EEG device earlier than expected (since the shipments still go out even after the refunds).

Advisor Meeting 2

We met with our advisor today from 10:20-10:50 and reported our progress. Here were the major points of the meeting:

Joan

  • Had to work on the Requirements Document mostly on own during the weekend (and likely for next week too).
  • Concerned about how the use cases mention AndroidEEG
    • Draw the storyboards for the use cases without the Android app for the presentation. Including the Android app in the storyboards gives the illusion that Android middleman software will be essential to the functioning of the program.
    • Using the 1Sheeld may reduce the amount of Android code needed.
    • Keep it as optional in case the middleman software makes it easier to write the code.
    • Write the use cases to optionally communicate with the 1Sheeld for debugging purposes.
  • Found the data faker program in the Emotiv Epoc SDK (the binary's EmoComposer). Further comments about EmoComposer are in the experiments post.
  • Can the data faker program be hooked up to the robot? That would be handy to debug the robot in case the headset readings get funky.
  • Set up the presentation permissions for our advisor ASAP.
    • Meet with the advisor again (likely on own) at approximately 09:30 on 2014-10-28
    • Since we'll need a decent draft of our presentation to show to our advisor, pretend that the presentation is due on Tuesday.
    • Further comments on the presentation will be emailed.
  • The theme for the website could be updated. It looks a little plain compared to my personal site.
  • Start learning about how heuristic functions are used in AI.
  • Look at the Numenta NUPIC code.
  • I should probably enable some form of commenting system on the blog. Will do so now.
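On the heuristic-functions item above: a heuristic estimates the remaining cost from a state to a goal so that a search like A* can expand promising states first. A toy sketch, using hypothetical grid coordinates rather than anything from the project:

```shell
# Manhattan distance: an admissible heuristic for 4-directional grid movement.
# Arguments: x1 y1 x2 y2; prints the estimated number of moves remaining.
manhattan() {
  local dx=$(( $1 - $3 )) dy=$(( $2 - $4 ))
  [ "$dx" -lt 0 ] && dx=$(( -dx ))
  [ "$dy" -lt 0 ] && dy=$(( -dy ))
  echo $(( dx + dy ))
}

manhattan 0 0 3 4   # prints 7: at least 7 moves remain from (0,0) to (3,4)
```

A* stays optimal as long as the heuristic never overestimates the true remaining cost, which is why Manhattan distance is the usual textbook example for grid movement.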

Drew

  • Unavailable for the meeting due to external circumstances.
  • Checked up on the robot parts that came in the mail just in case anything happened to them.
  • Ordered an LCD screen to get the free shipping. May come in handy later.

Emotiv EPOC SDK Experiments

So I started playing around with the SDK and the SDK's provided tools. I've kept some screenshots to highlight some of the usability problems I encountered. The first two screenshots highlight aesthetic usability issues in EmoComposer, since its widgets try to stick out. The same could be said for the EmotivControlPanel, which also sticks out due to the custom styling on the buttons and other widgets. While I do have experience with writing Qt Style Sheets, and the EmotivControlPanel even supports loading a custom stylesheet, this functionality is useless without documentation of the widgets used to construct the interface. I'd recommend the Emotiv developers document which widgets users need to target in their stylesheets for the SDK tools, and/or remove all custom styling except for the custom widgets should they stick with a proprietary model. Personally, I'd prefer a combination of both, so the Emotiv developers can keep their custom styling via a stylesheet but allow users to turn it off in case they can't easily read the interface.

Other than that, the EmotivControlPanel works just as expected when tested with the EmoComposer, even though I frequently confused the "Push Skill" slider with carrying out the "Push" outside of training mode for the Affectiv Suite. The documentation does not mention that user profiles can't be saved to the EmoComposer. Trying to do so causes the EmotivControlPanel to freeze, requiring the user to kill the application manually.

Surprisingly, the EmoKey and the TestBench software don't stick out as much. The only issues I encountered with EmoKey were it not triggering when the blink button was pressed in EmoComposer, not fully closing even after "Quit" is clicked in the system tray (which, again, requires killing the application manually), and not launching keybindings such as Alt+F3 (which brings up the window management menu for me) and Ctrl+Space (which launches Kupfer). Even with the QListView's row coloring, the custom colors are done correctly, since both a foreground color and a background color are defined. This makes the software more accessible to people with unusual color schemes.

I haven't been able to test the TestBench software since it has no ability to connect with the EmoComposer. Hopefully this'll change in the future, but I don't know what changes will be necessary to do this. At least it closes properly when Application → Quit is selected.

I will also be documenting any experiences I have with Emotiv's developers in the future. So far, the support's been pretty responsive.

Group Meeting 1

We met today to get started on our requirements presentation and revising our requirements document. On top of needing to add class diagrams and sequence diagrams to it, we also noticed there were some use cases we should document. The most notable case was how we would let the user differentiate between a weak signal and confused input. We are also going to add drawings of each of our use cases to help clarify them.

So far, the presentation's mostly outlined. We'll get to filling in the content for the presentation over the weekend (including rudimentary storyboards for our presentations that'll likely be in *.xcf or *.svg format) and adding more documentation on our experiments with the Emotiv SDK.

Ordering Robot Parts

On Tuesday we ordered the first set of parts for the robot from SparkFun Electronics, enough to get the robot moving. The only necessary component that we didn't purchase was an Arduino microcontroller, which we already have in our possession. Along with this board we went with a fairly basic robot design favoring simplicity when possible. The order consisted of:
* 1 Multi-Chassis - 4WD Kit (Basic) - A rectangular chassis that has four independently-motorized wheels
* 2 Motor Driver 1A Dual TB6612FNG - The intermediaries between the microcontroller (essentially the robot's brain) and the motors
* 1 5 Pack of 5mm Addressable RGB LEDs - These will be used for debugging and user-feedback, different colors representing different messages
* 1 Graphic LCD 128x64 - Mostly just tacked this on to get free shipping, but it could wind up in the project if we want another form of output

We're excited to get working on the hardware, but first up is preparing for our requirements presentation on next Thursday!

Advisor Meeting 1

We met with our advisor today from 10:20-10:50 and reported our progress. Here were the major points of the meeting:

Joan

  • Packaged the Fedora version of the Research SDK for Arch Linux.
  • Found most of the dependencies for the SDK. Still don't know what package provides libmkl_rt.
  • Emailed the support team about the packaging of the SDK so the developer team could fix this.
  • There's a fake data generator in the SDK Lite for Windows and Mac (that one's free!), so try to find the equivalent in the Linux SDK version.
  • Finish work on not referencing the static copies of the library when possible.
  • Found a bug in nikola/plugins/task/sitemap/__init__.py: it wasn't checking whether directories were directories in a cross-platform manner. Will report this to the developers with an official patch.
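The sitemap bug seems to come down to testing directory-ness portably: inspecting the path string breaks across platforms, while asking the filesystem does not. A shell-flavored sketch of the distinction (the real fix lives in Nikola's Python code, presumably via something like os.path.isdir):

```shell
# Build a throwaway tree with one directory and one file
tmp=$(mktemp -d)
mkdir "$tmp/posts"
touch "$tmp/index.html"

# Portable check: ask the filesystem, don't parse the path string
is_dir() { if [ -d "$1" ]; then echo yes; else echo no; fi; }

is_dir "$tmp/posts"        # yes
is_dir "$tmp/index.html"   # no
```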

Drew

  • Ordered the parts from SparkFun.
  • The chassis has built-in motors.
  • We may need a separate shield for driving the wheels?
  • Probably going to drive the motors with DC.
  • Settled on multicolor LEDs because single color LEDs can get confusing for debugging.

Installing the Emotiv SDK

So we ordered the device this Saturday after the scholarship money came in... it'll take about two weeks from today to arrive. In the meantime, Joan downloaded the SDK to see how it's installed. Judging by a certain post on Emotiv's forums, I wasn't the only one shocked by how they provided the packaging: an installer that downloads another installer to do the actual installation, in an unsafe manner. Thankfully I knew enough about packaging to wrap it up in a much saner manner. You can find the PKGBUILD here.

First on my list of things to do is to minimize the references to extra copies of the libraries as much as possible. Second is to make a proper *.spec file for Fedora installs.

Requirements Document Meeting Logs

Meeting Notes

Here were the major points from our meeting with our supervisor, Dr. Michael Turi, and our capstone instructor, Dr. George Hauser:

With Dr. Turi

  • The description part of the requirements document is similar to the goals section of the proposal.
  • The tasks we laid out should have a more pessimistic outlook.
  • The use case template we pasted into the document is fine. It might not hurt to include situations like, "What if the signal's fuzzy?"

With Dr. Hauser

  • Due to the large amount of unknowns with our project, the requirements might still be fuzzy. But that's okay since it's a draft.
  • It'd be neat to get the equipment before the presentation for the requirements document.
  • The class project web page should be in the introduction or in a special section near the end.

Document work summary

After that, we continued working on the requirements document. Most of the work involved explaining requirements more technically and splitting up the tasks wherever they could go wrong. However, we're still a bit nervous that we may not have written the educational part of the description with the right amount of detail.

Furthermore, the level of technical detail for the rest of the goals might've been either too much (in cases where we knew how to expand it) or too little (in cases where we're still not sure how to deal with it).

Setting up Nikola on PLU computers

We set up our site for our GitHub page last night using a Linux computer that already had nikola installed. Since we also needed to work on the site at school, we had to set nikola up on the school computers as well. To do this, we carried out the following steps:

  1. Install most of the dependencies using pip
    \Python34\Scripts\pip install --root=/path/to/destdir nikola
  2. Install nikola using the github version
  3. Edit C:\Python34\Lib\distutils\msvc9compiler.py at line 294 to hardcode 12.0 as the MSVC version
  4. Remove all instances of -mcygwin from C:\Python34\Lib\distutils\cygwinccompiler.py
  5. Install lxml using the official instructions for MS Windows and specify the path to install like in step 1.
  6. Install blinker, six, markupsafe, logbook, natsort, yapsy, and pyrss2gen manually with pip
  7. Construct a shell script file to automagically set environment variables, making it much less painful to manually copy nikola to a location where the executable can be run (as in, no need to type out extremely long path names):
if test -d /cygdrive;then
    #export PYPREFIX="/cygdrive/x/Apps/pylibs"
    export PYPREFIX="/cygdrive/c/Users/PLUCSCE/Downloads/pylibs"
else
    #export PYPREFIX="/x/Apps/pylibs"
    export PYPREFIX="/c/Users/PLUCSCE/Downloads/pylibs"
fi

export PYTHONPATH="${PYPREFIX}/Python34:${PYPREFIX}/Python34/lib:${PYPREFIX}/Python34/lib/site-packages"
export PATH="${PYPREFIX}/Python34/Scripts:${PATH}"

With the installation complete, all we had to do was keep a copy on our X drives, copy it to the Downloads folder, and source the script file before continuing to work on our site.
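For what it's worth, step 4 above could be scripted instead of edited by hand. A sketch using sed, demonstrated here on a throwaway stand-in file rather than the real cygwinccompiler.py:

```shell
# Demonstrate the step-4 edit on a stand-in file: strip every -mcygwin flag
tmpfile=$(mktemp)
echo 'compiler="gcc -mcygwin -O -Wall"' > "$tmpfile"
sed -i 's/ -mcygwin//g' "$tmpfile"
cat "$tmpfile"   # compiler="gcc -O -Wall"

# On the lab machines the equivalent would target the file from step 4:
# sed -i 's/ -mcygwin//g' /c/Python34/Lib/distutils/cygwinccompiler.py
```

Step 3 (hardcoding the MSVC version at a specific line) is fragile enough to automate that doing it by hand in an editor is probably still the safer call.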