Playing Audio over Bluetooth on Raspberry Pi (Using BlueALSA, Command Line)

In many situations (connecting remotely to the Pi, programmatically playing audio), it is necessary to have command line interface (CLI) options that enable you to set up and play audio via Bluetooth. This post covers the process of connecting to a Bluetooth device (speaker) and using the bluealsa library to play audio from the command line. The post also includes a sample of how to play audio over Bluetooth devices from a Node.js app using the sound-player Node.js library.

Step 1: Install Bluealsa

BlueALSA is a direct integration between BlueZ (an implementation of the Bluetooth protocol stack) and the ALSA sound library. Previously, this was done using PulseAudio; bluez-alsa promises the same result with fewer dependencies. From the project description:

The current status quo is, that in order to stream audio from/to a Bluetooth device, one has to install PulseAudio, or use Bluez < 5. However, Bluez version 4 is considered to be deprecated, so the only reasonable way to achieve this goal is to install PulseAudio.

With this application (later named as BlueALSA), one can achieve the same goal as with PulseAudio, but with less dependencies and more bare-metal-like. BlueALSA registers all known Bluetooth audio profiles in Bluez, so in theory every Bluetooth device (with audio capabilities) can be connected. In order to access the audio stream, one has to connect to the ALSA PCM device called bluealsa. The device is based on the ALSA software PCM plugin.

To install bluez-alsa, use

sudo apt-get install bluealsa 

If you are interested in building bluealsa from source, please see the project's GitHub page.

Note: If you are running the recent version of the Raspberry Pi OS – Stretch, it already comes with bluealsa installed.

Step 2: Connect Your Bluetooth Device (Speaker, Mic, etc.)

To do this, the Linux Bluetooth control CLI tool (bluetoothctl) is used to scan for devices (to get the MAC address), pair with them and then connect.

bluetoothctl  # start the bluetooth control tool

power on  # turn on your Pi's bluetooth interface if it is off

agent on  # turn on the default bluetooth agent

scan on   # scan for all nearby bluetooth devices. Your device should be discoverable and turned on. Note your device's MAC address

pair XX:XX:XX:XX:XX:XX     # you should see "pairing successful"

connect XX:XX:XX:XX:XX:XX  # you should see "connection successful"

Step 3: Play Audio

Once Step 2 is completed, your Bluetooth device should now be available to bluealsa as a virtual PCM device. What you need to know now is the right device id, which can be passed as a parameter to playback tools such as aplay, mpg321, etc. Your device id is given as:

bluealsa:HCI=hci0,DEV=XX:XX:XX:XX:XX:XX,PROFILE=a2dp

where XX:XX:XX:XX:XX:XX represents your Bluetooth device's MAC address.

To play an audio file using aplay, simply supply the device id:

aplay -D bluealsa:HCI=hci0,DEV=XX:XX:XX:XX:XX:XX,PROFILE=a2dp /usr/share/sounds/alsa/Front_Center.wav

Similarly, to play an audio file using mpg321 (note that mpg321 plays mp3 files, so substitute the path to an mp3):

mpg321 -a bluealsa:HCI=hci0,DEV=XX:XX:XX:XX:XX:XX,PROFILE=a2dp /path/to/file.mp3

Note: Since your Bluetooth device is a virtual PCM device, it will not be listed when you run aplay -l or similar listing commands.

Bonus: Recording Audio

Recording audio via your Bluetooth device (if it has a microphone) is also fairly easy. Again, you update your device id parameter; however, rather than using a2dp as the profile, you use sco.


arecord -D bluealsa:HCI=hci0,DEV=XX:XX:XX:XX:XX:XX,PROFILE=sco test.wav

(I have had some difficulty with audio recording where the recorded file just contains nothing … :( … might be my headset.)

A Nodejs Implementation

A simple way to play audio over Bluetooth from Node.js is the sound-player library. Simply set the device parameter to your bluealsa virtual PCM device id:

var SoundPlayer = require("sound-player");

var options = {
    filename: "preview.wav",
    gain: 10,
    debug: true,
    player: "aplay",
    device: "bluealsa:HCI=hci0,DEV=XX:XX:XX:XX:XX:XX,PROFILE=a2dp"
};

var player = new SoundPlayer(options);

player.on("error", function(err) {
    console.log("Error occurred:", err);
});

Final Note: Bluealsa and PulseAudio do not play well together.

You will have to completely uninstall PulseAudio and all its baggage in order to use bluealsa.

Due to BlueZ limitations, it seems, that it is not possible to use BlueALSA and PulseAudio to handle Bluetooth audio together. BlueZ can not handle more than one application which registers audio profile in the Bluetooth stack. However, it is possible to run BlueALSA and PulseAudio alongside, but Bluetooth support has to be disabled in the PulseAudio. Any Bluetooth related module has to be unloaded.

However, from my tests, playing audio over Bluetooth using bluealsa has been easier and faster to set up, and more stable, compared to using PulseAudio.

This post and the above code snippet was tested on a Raspberry Pi 3 running Stretch.



Build a Waving Robot using Watson Services

So, building on the previous tutorial on identifying peaks in a sound file, I have integrated it into my TJBot robot. Essentially, the robot plays a song and waves its arm (driven by a servo motor) in tandem with the peaks in the song.

Detailed instructions can be found on instructables. Full code can be found on Github.

Interested in TJBot?

IBM TJBot is a DIY kit that allows you to build your own programmable cardboard robot powered by IBM Watson Services. It consists of a cardboard cutout (which can be 3D printed or laser cut), a Raspberry Pi and a variety of add-ons – including an RGB LED light, a microphone, a servo motor, and a camera. Learn more at


Detect beats and extract amplitude data from an audio file using Nodejs


Fork on Github

The Problem: Code my TJBot robot such that it listens to and enjoys good music. For a robot to actually enjoy music, it needs to .. well .. become aware of the beats (usually peaks) in said song! (and then react to them). Update … here is my robot waving/dancing to song.

The general theory is simple – convert the sound file into an array of continuous signals, identify signals that occur above a set peak threshold and voila … you now know where the beats are!
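As a minimal sketch of that idea (the function name, sample values and threshold here are illustrative, not from the final app): scan the samples and keep the indices of local maxima whose amplitude exceeds the threshold.

```javascript
// Collect indices of local maxima whose amplitude exceeds `threshold`.
// `samples` is an array of amplitude values (e.g. decoded PCM data).
function findPeaks(samples, threshold) {
  var peaks = [];
  for (var i = 1; i < samples.length - 1; i++) {
    var amp = Math.abs(samples[i]);
    if (amp > threshold &&
        amp >= Math.abs(samples[i - 1]) &&
        amp >= Math.abs(samples[i + 1])) {
      peaks.push(i); // convert an index to seconds by dividing by the sample rate
    }
  }
  return peaks;
}

console.log(findPeaks([0, 0.2, 0.9, 0.3, 0.1, 0.8, 0.2], 0.5)); // prints [ 2, 5 ]
```

A real beat detector then looks at the spacing between these indices to estimate the tempo, as the post linked below does.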

In my search, I came across a really helpful blog post about beat detection (basically identifying the peaks above a given threshold, computing their frequency of occurrence and estimating the overall beats/timing of the song). For consistency with other applications, I needed my app to work with Node.js, so the primary task was finding an appropriate library to assist with decoding an audio file. The post above relies on the HTML5 Web Audio API, which is not exactly supported in Node.js. However, some good samaritans have started the amazing work of creating a Node.js version (web-audio-api), and that’s what I finally used! Continue reading


Introducing TJBot – An open source maker kit connected to Watson Services.


So, the past few days (months actually) have been spent working to prepare for the Watson Developer Conference where a really special project was unveiled – TJBot. I have had the incredible good fortune to have worked as technical lead (software) in creating TJBot and we are all super excited to share this project with the entire open source community. The project is the brainchild of my colleague Maryam Ashoori, and we worked with an amazing industrial designer colleague of ours (Aaron) who helped design TJ!

 TJBot is an open source project designed to help you access Watson Services in a fun way. You can 3D print it or laser cut it, then use one of its recipes to bring it to life! #TJBot

TJBot can be laser cut from cardboard/chipboard (designs are open source and can be downloaded here). You can also 3D print it (download 3D files here). On the inside, TJ has

  • A Raspberry Pi 3,
  • A USB microphone,
  • A Raspberry Pi camera in its left eye,
  • An RGB LED on its head and
  • A Bluetooth speaker.

See the video render below to get an idea of how TJ is assembled from laser cut cardboard.

Continue reading


Why I’m excited about the upcoming Watson Developer Conference!


I’ll be attending Watson Developer Conference 2016 Yay!!

Developer conferences have always been an interesting experience for me! Having attended (and given short talks at) conferences such as Samsung Tizen DevCon, Blackberry DevCon and Google developer conferences, the most exciting aspect for me has always been the hands-on dev sessions: learning new technology, getting support from developer evangelists, tackling bugs, learning new design patterns … and winning trophies at hackathons! For the upcoming Watson Developer Conference, I’m equally stoked about the learning opportunities that lie ahead – learning about Watson services, getting answers to some open questions and all the other extras.

Work directly with industry-leading experts and learn from your peers. The schedule is packed with technical talks, hands-on labs and coding challenges to get you working with the tools that will make you a sought-after developer.

Continue reading


My Experience with Assets 2016 Conference


This year I had the opportunity to attend the ASSETS 2016 conference in Reno, Nevada, and it was an excellent learning experience. For clarity, ASSETS stands for the ACM SIGACCESS Conference on Computers and Accessibility.

The ASSETS conference is the premier computing research conference exploring the design, evaluation, and use of computing and information technologies to benefit people with disabilities and older adults.

Accessibility is a field of growing interest for me, and while I have previously had the opportunity to do some research related to wearable interventions for individuals with cognitive impairments, this was my first opportunity to truly engage with the community. At this conference, I got the chance to deliver a poster presentation about my work on a smartwatch app for individuals with attention deficiency and anxiety. I did get some really great feedback, and the pleasure was all mine connecting with this awesome community! Below are highlights of my experience attending the conference. Continue reading


Fan Theory – How Does Westeros Learn about Jon Snow’s Heritage? (Hint: Fire)


So, at the end of Game of Thrones Season 6, it is revealed (via the 3-eyed raven) that Jon Snow is actually NOT Ned Stark’s son/bastard as we had previously believed. Jon Snow is the son of Rhaegar Targaryen (brother to Daenerys Targaryen) and Lyanna Stark (sister to Ned Stark). Wow! What a twist! But here is the problem: only the previous 3-eyed raven (the old guy in the tree who was killed by the White Walker King) and the current 3-eyed raven (our beloved Brandon Stark!) are privy to this important piece of information.

The rest of Westeros DOES NOT know Jon Snow’s true heritage.

So, this post is my thoughts/theory on how the rest of Westeros comes to learn of this … if they do at all. First things first … Continue reading


Using a Hash to Remove Duplicates in Mongoose, MongoDB – Aggregate Exceeded Document Size Work Around/Fix


One way that has been proposed for removing duplicates in MongoDB is to use the MongoDB aggregate function. It’s a straightforward process in which: 1) you specify the criteria for comparison (i.e. the field you want to match in order to determine a duplicate); 2) you group these duplicates (where each record belongs to only one group); 3) now that you know the duplicates, you weed out the offenders: keep the first element in each duplicate group and delete the others. See this excerpt from an aggregate solution on Stack Overflow. When developing Node.js apps that need MongoDB interaction, I usually use the Mongoose library, an elegant MongoDB object modeling library for Node.js. This post describes an alternate (and naive) approach to removing duplicates for situations where memory issues make the aggregate option non-viable.

var duplicates = [];

db.collectionName.aggregate([
  // Discard selection criteria; you can remove the $match stage if you want
  { $match: {
      "source_references.key": { "$ne": '' }
  }},
  { $group: {
      _id: { key: "$source_references.key" }, // can be grouped on multiple properties
      dups: { "$addToSet": "$_id" },
      count: { "$sum": 1 }
  }},
  { $match: {
      count: { "$gt": 1 }    // duplicates have a count greater than one
  }}
])               // you can print the result up to this point to check the duplicates
.forEach(function(doc) {
    doc.dups.shift();      // skip the first element, so one copy of each record is kept
    doc.dups.forEach(function(dupId) {
        duplicates.push(dupId);   // collect all duplicate ids
    });
});

// Remove all duplicates in one go
db.collectionName.remove({ _id: { $in: duplicates } });

When aggregate doesn’t work – Maximum Document Size Exceeded

exception: aggregation result exceeds maximum document size (16MB)

When your dataset is large (e.g. millions of records), or the fields you use for duplicate comparison are text heavy (in my case these were email fields: sender, receiver, body, timestamp), you can run into the MongoDB maximum document size exceeded error. A workaround (when writing native MongoDB queries) is to set the allowDiskUse option to true, which allows MongoDB to write to temporary files to manage memory use. However, Mongoose does not seem to support this option well, and even after setting allowDiskUse to true, I still got the maximum document size exceeded error. Note that you can still run this aggregation using native MongoDB queries on the mongodb shell.
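For reference, here is a sketch of the native shell form with allowDiskUse set (the collection and field names are the same placeholders used in the snippet above):

```
db.collectionName.aggregate(
  [
    { $group: { _id: "$source_references.key", dups: { $addToSet: "$_id" }, count: { $sum: 1 } } },
    { $match: { count: { $gt: 1 } } }
  ],
  { allowDiskUse: true }  // lets the aggregation spill to temporary files on disk
)
```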

A Naive Workaround – Using Hashes

The alternative approach I propose here is naive and runs in linear time, but requires multiple steps. For a process like this, which will run only once in my current application, I can hazard the multiple steps.

Step 1: Compute a hash over your comparison fields. Preferably, add this as a field in your db. I used a Node.js md5 hash library to compute a new database attribute.

var md5 = require("js-md5"); // e.g. the js-md5 package

var hash = md5.create();
hash.update(record.from + record.sent_at + record.body); // concatenate comparison fields and compute hash
var hashhex = hash.hex();

// Update your db, add hashhex as an attribute for each record

Note: You will need to add the hash to your db records, either while you are creating your dataset or by writing an update script to insert the new hash field.

Step 2: Aggregate based on your single hash field (much lighter, and shouldn’t result in the maximum document size error). In my case, this turned out to be a fairly fast operation. After aggregating on the hash field, you can generate a list of duplicate ids, which will subsequently be removed.

var duplicates = []; // store duplicate ids

// YourModel is your mongoose model
YourModel.aggregate([
    { $group: {
        _id: { hash: "$hash" },
        dups: { "$addToSet": "$_id" },
        count: { $sum: 1 }
    }},
    { $match: { count: { $gt: 1 } } }
])
.exec(function(err, data) {
    if (err) {
        throw err;
    } else {
        data.forEach(function(doc) {
            doc.dups.shift();      // first element skipped, so one copy of each record is kept
            doc.dups.forEach(function(dupId) {
                duplicates.push(dupId);   // collect all duplicate ids
                //console.log("pushing id ", dupId);
            });
        });
    }
});

Step 3: Delete your duplicates.

db.collectionName.remove({_id:{$in:duplicates}}, function (err, count) { console.log( " done removing duplicates ")});

All done! Was this approach useful for your particular scenario ?


D3.js version 4.x – Examples and Changes from version 3.x


Recently, I’ve been spending some time learning more about data visualization and have decided to learn d3.js, starting from the basics. I’m working through Scott Murray’s book and a few other tutorials created by Mike Bostock (creator of d3 … an incredible feat!). D3 is a hugely successful visualization library and is lucky to still be in active development. This means things change fast, and many of the tutorials written for version 3.x (most tutorials online, including the current version of Scott’s book) need some tweaking to work with version 4.x. This post looks at some of those changes. Continue reading
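As one example of the kind of change involved (a sketch assuming d3 v4 is loaded; the domain/range values are illustrative), v4 flattened many of the old nested namespaces into single camel-cased names:

```
// d3 version 3.x:
// var scale = d3.scale.linear().domain([0, 100]).range([0, 500]);

// d3 version 4.x: d3.scale.linear() was renamed to d3.scaleLinear()
var scale = d3.scaleLinear().domain([0, 100]).range([0, 500]);
```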


Wrong HDMI Color (Pink and Green Color Distortion) on Raspberry Pi

So recently I had some quick prototyping to do on a Raspberry Pi, and after connecting the HDMI cable, the colors looked washed out. Areas that should be white were pink, and areas that should be black were green! Didn’t make for good viewing!

The Fix

As suggested by several online sources, the key to fixing this is to edit the /boot/config.txt file. Many suggest you boost the signal being sent from the Pi to the HDMI adapter by uncommenting or adding the line


I tried this, and it did not work. What finally helped was adding the following lines to /boot/config.txt.

sdtv_mode=2
hdmi_drive=2

And of course, I also updated the Pi:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install alsa-base alsa-utils

Best of luck!
