Wifi on the Command Line on a Raspberry Pi

I hate hooking a monitor up to my Raspberry Pi. It feels wrong. It feels like I should be able to do everything from the command line, and the fact is I can.

If you’re pulling your Raspberry Pi out of the box and are interested in bootstrapping without a monitor, check out my other post on Easy and Offline Connection to your Raspberry Pi.

Afterward, you may want to set up your wifi access - that is, you want to tell your pi about the wireless access points at your home, your coffee shop, or whatever.

Doing that from the command line is pretty easy, so this will be short.

You’re going to be using a utility on Raspbian called wpa_cli. This handles wireless configuration and writes its configuration into /etc/wpa_supplicant/wpa_supplicant.conf. You could even just edit that file directly, but now we’re talking crazy talk. Actually, I do that sometimes, but whatever.

First, run…

wpa_cli status

…to see what the current status is. If you get Failed to connect to non-global ctrl_ifname: (null) error: No such file or directory, that’s just a ridiculously cryptic error message that means you don’t have a wifi dongle. Why they couldn’t just say “you don’t have a wifi dongle” I don’t know, but whatever.

If you do have a wifi dongle, you’ll instead see something like…
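The exact output depends on your adapter and driver, but it's a handful of key=value lines, roughly like this (the interface name is typical and the MAC address here is made up)…

Selected interface 'wlan0'
wpa_state=INACTIVE
address=b8:27:eb:12:34:56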

Yay! You have a wireless adapter, which means you likely have a wifi dongle plugged into a USB port. It says here that the current state is INACTIVE. That’s because you’re not connected to any access points.

To do so, you need to run scan, but at this point, you may want to enter the wpa_cli interactive mode. That means that you don’t have to keep prefixing your commands with wpa_cli, but can instead just type the commands. To enter interactive mode, just do…

wpa_cli

To get out at any time just type quit <enter>.

Now do a scan using…

scan

It’s funny, because it appears that nothing happened, but it did. Use…

scan_results

…to see what it found.

This scanning step isn't strictly necessary, by the way; there's a good chance you already know the name (SSID) of your access point, and in that case you don't need to do this.

Next you create a new network using…

add_network

You’ll get an integer in return. If it’s your first network, you’ll get a 0. That’s the ID of the new network you just created, and you’ll use it on these subsequent commands.

To configure your network do this…

set_network 0 ssid "mynetwork"
set_network 0 psk "mypassword"

Something I read online said that as soon as you enter this, it would start connecting, but I had to also do this to get it to connect…

select_network 0

Now there’s one more thing. If you’re like me, you don’t just connect to a single AP. I connect from home, my mifi, my local coffee shop, from work, etc. I want my pi to be able to connect from any and all of those networks.

Adding more networks is as easy as following the instructions above multiple times, but you want to set one more network property - the priority. The priority property takes an integer value and higher numbers are higher priority. That means that if I have network1 (priority 1) and network2 (priority 2), and when my pi boots it sees both of those networks, it’s going to choose to connect to network2 first because it has the higher priority.
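For example, assuming the second add_network call handed back an ID of 1, the extra configuration might look like this (the SSID, password, and priority values here are made up for illustration)…

set_network 0 priority 1
set_network 1 ssid "coffeeshop"
set_network 1 psk "coffeepassword"
set_network 1 priority 2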

Okay, that does it.

If you want to see everything I’ve written about the Raspberry Pi, check out codefoster.com/pi

Easy and Offline Connection to your Raspberry Pi

Getting a Raspberry Pi online is really easy if you have an HDMI monitor, keyboard, and mouse.

Subsequently getting an SSH connection to your pi is easy if you have a home router with internet access that you’re both (your PC and your pi) connected to.

But let’s say you’re on an airplane and you pull your Raspberry Pi out of its box and you want to get set up. We call that provisioning. How would you do that?

I’ll propose my method.

First, you need to plug your pi into your PC using an ethernet cable. If you’re a technologist of old like I am, you may be rummaging through your stash for a crossover cable at this point. It turns out that’s not necessary though. I was pretty interested to discover that modern networking hardware has auto-detection that is able to determine that you have a network adapter plugged directly into another network adapter and crosses it over for you. This means I only have to carry one ethernet cable in my go bag. Nice.

If you put a new OS image on your pi and boot it up, it already detects and supports the ethernet NIC, so it should get connected and get an IP automatically.

Here comes the seemingly difficult part. How do you determine what the IP address of your pi is if you don’t have a screen?

The great thing is that the pi will tell you if you know how to listen.

The means by which you listen is called mDNS. mDNS (Multicast DNS) resolves host names to IP addresses within small networks that don't have a local name server. You may also hear mDNS referred to as zero-configuration networking, and Apple implemented it and felt compelled (as they tend to) to rename it - they call it Bonjour.

This service is included by default on the Raspberry Pi’s base build of Raspbian, and what it means is that out of the box, the pi is broadcasting its IP address.

To access it, however, you also need mDNS installed on your system. The easiest way I am aware of to do this is to download and install Apple’s Bonjour Print Services for Windows. I’m not certain, but I believe if you have a Mac this service is already there.

Once you have mDNS capability, you simply…

ping raspberrypi.local -4

The name raspberrypi is there because that’s the default hostname of a Raspberry Pi. I like to change the hostname of my devices so I can distinguish one from another, but out of the box, your pi will be called raspberrypi. The .local is there because that’s the way mDNS works. And finally, the -4 is an argument that specifically requests the IPv4 address.

If everything works as expected you’ll see something like…
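On Windows, the reply looks roughly like this (the times shown are just illustrative)…

Pinging cfpi1.local [169.254.187.84] with 32 bytes of data:
Reply from 169.254.187.84: bytes=32 time=1ms TTL=64
Reply from 169.254.187.84: bytes=32 time<1ms TTL=64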

Again, my pi has been renamed to cfpi1, but yours should be called raspberrypi if it’s new.

My system uses 192.168.1.X addresses for my wireless adapter and 169.254.X.X for my ethernet adapter.

So that’s the information I needed. I can now SSH to the device using…

ssh pi@169.254.187.84

I could just use ssh pi@raspberrypi.local to remote to it, but I’ve found that continuing to force this local name resolution comes with a little time cost, so it’s sometimes significantly faster to hit the IP address directly. I only use the mDNS to discover the IP and then I use the IP after that.

Provisioning a Raspberry Pi usually includes a number of system configuration steps too. You need to connect it to wireless, set the locale and keyboard language, and maybe turn on services like the camera. If you’re used to doing this through the Raspbian Configuration in XWindows, fear not. You can also do this from the command line using…

sudo raspi-config

Most everything you need is in there.

You may also be wanting to tell your pi about your wifi router so it's able to connect via wireless the next time you boot up. For that, check out my post at codefoster.com/pi-wifi. Actually, if you're playing a lot with the Raspberry Pi, you might want to visit codefoster.com/pi and see all of the posts I've written on the device.

Happy hacking!

The Most Basic Way to Access GPIO on a Raspberry Pi

​I’ve been hacking on the Raspberry Pi of late and wanted to share out some of the more interesting learnings.

I think people that love technology love understanding how things work. When I was a kid I took apart the family phone because I was compelled to see what was inside that made it tick. My brother didn’t care. If it made phone calls, he was fine with it. I had to understand.

Likewise, I knew that I could use a Node library and change the GPIO pin levels on my Raspberry Pi, but I wanted to understand how that worked.

In case you’re not familiar, GPIO stands for General Purpose Input/Output and is the feature of modern IoT boards that allows us to controls things like lights and read data from sensors. It’s a bank of pins that you can raise high (usually to something like 3.3V) or low (0V) to cause some electronic behavior to occur.

On an Intel Edison (another awesome IoT board), the platform developers decided to provide a C library with mappings to Node and Python. On the default Edison image, they provided a global node module that a developer could include in his project to access pins. The module, by the way, is called libmraa.

On a Raspberry Pi, it works differently. Instead of a code library, a Pi running Raspbian uses the Linux file system.

When you’re sitting at the terminal of your pi (either hooked up to a monitor and keyboard or ssh’ed in), try…

cd /sys/class/gpio

You’ll be taken to the base of the file system that they chose to give us for accessing GPIO.

The first thing to note is that this area is restricted to the root user. Bummer? Not quite. There’s a way around it.

The system has a function called exporting and unexporting. Yes, I know that unexport is not a real word, but alas I’m not the one that made this stuff up, and besides, who said Linux commands had to make sense?

To access a pin, you have to first export that pin. To later disallow access to that pin, you unexport it.

I had a hard time finding good documentation on this, but then I stumbled upon this znix.com page that describes it quite well. By the way, this page references “the kernel documentation,” but when I hit that link here’s what I get…

Oh well.

Now keep in mind that to follow these instructions you have to be root. You cannot simply sudo these commands. There is an alternative called gpio-admin that I’ll talk about in a second. If you want to just become root to do it this way, you do…

su root

If you get an error when you do that, you may need to first set a password for root using sudo passwd.

To export then, you do this…

echo <pin number> > /sys/class/gpio/export

And the pin number is the GPIO number - not the physical header pin number. So GPIO4 comes out on header pin 7 on a Raspberry Pi 2, but to export it you use the number 4.
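So for GPIO4, that's…

echo 4 > /sys/class/gpio/export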

When you do that, a virtual directory is created inside of /sys/class/gpio called gpio4, and that directory contains virtual files such as direction, value, edge, and active_low. These files don’t act like normal files, by the way. When you change the text inside one of these files, it actually does something - like perhaps the voltage level on a GPIO pin changes. Likewise, if a hardware sensor causes the voltage level on a pin to change, the content of one of these virtual files is going to change. So this becomes the means by which we communicate in both directions with our GPIO pins.

The easiest way, then, to read the value of the /sys/class/gpio/gpio4/value file is…

cat /sys/class/gpio/gpio4/value

Easy.

To write to the same file, you have to first make sure that it's an out pin. That is, you have to make sure the pin is configured as an output pin. To do that, you change the virtual direction file. Like this…

echo out > /sys/class/gpio/gpio4/direction

That’s a fancy (and quick) way to edit the contents of the file to have a new value of “out”. You could just use vior nanoto edit the file, but using echo and the direction operator (>) is quicker.

Once you have configured your pin as an output, you can change the value using…

echo 0 > /sys/class/gpio/gpio4/value    # set the pin low (0V)
echo 1 > /sys/class/gpio/gpio4/value    # set the pin high (3.3V)

Now that I’ve described the setting of the direction and the value, you should know that there’s a shortcut for doing both of those in one motion…

echo high > /sys/class/gpio/gpio4/direction

There’s more you can do including edge control and logic inversion, but I’m going to keep this post simple and let you read about that on the znix.com page.

Now, although it’s fun and satisfying to understand how this is implemented, and it might be fun to manipulate the pins using this method, you’ll most likely want to use a language library to control your pins in your app. Just know that the Python and Node libraries that change value are actually just wrappers around these file system calls.

For instance, let’s take a look at the pi-gpio.js file in the pi-gpio module…

write: function(pinNumber, value, callback) {
    pinNumber = sanitizePinNumber(pinNumber);
    value = !!value ? "1" : "0";
    fs.writeFile(sysFsPath + "/gpio" + pinMapping[pinNumber] + "/value", value, "utf8", callback);
}

When you call the write() method in this library, it’s just calling the file system.
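From the consuming side, using that module looks roughly like this (a sketch from memory - check the pi-gpio readme for the exact API; note that this module takes the physical header pin number, so 7 maps to GPIO4)…

var gpio = require("pi-gpio");

gpio.open(7, "output", function (err) {     // physical pin 7 = GPIO4
    gpio.write(7, 1, function () {          // drive the pin high
        gpio.close(7);                      // release the pin
    });
});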

So there you have it. I hope you feel a little smarter.

Deploying TypeScript Projects to Azure from GitHub Using Continuous Deployment

I’m working on a fun project called Waterbug. You can peek or play at github.com/codefoster/waterbug.

Waterbug is an app that collects data as you row on a WaterRower and visualizes it in an Angular 2.0 app.

It’s a fun app because it uses a lot of modern stuff. Modern stuff is usually the fun stuff, and that’s why it’s always nice to be working on a greenfield project.

So, like I mentioned, one of the components of this app uses Angular 2.0. Angular is itself written in TypeScript, and you’re strongly encouraged to write your Angular 2.0 apps using TypeScript. You don’t have to, but at least in my opinion, you’d be crazy not to.

TypeScript is awesome.

TypeScript makes everything more terse, more elegant, and easier to read, and it allows your tooling (Visual Studio Code is my editor of choice) to reason about your code and thus help you out immensely.

The important thing to remember about TypeScript and the reason I think for its rapid uptake is that it's not a different language that compiles to JavaScript. It's a superset of JavaScript. That means you don't throw any of your existing work away. You just start sprinkling in TypeScript where it benefits you. If you're like me though, it won't be long before you're addicted to using it everywhere.

When you’re working on a TypeScript project, you write in .ts files and those get transpiled from .ts files to .js files.

Herein lies our first question.

Should we check those .js files (and also the .js.map files that are created by default) into our code repository (GitHub in my case)?

The answer is no.

The .js code is derivative and does not belong in source control. Source control is for source files. The .ts files are our source files in this case.

If you start checking your .js files into source control, you’re inevitably going to end up with .ts files and their associated .js files out of sync. Hair pulling will surely ensue.

I’ve gone one step further and determined that I don’t even want to look at my .js files in my editor.

In Visual Studio Code, I can go to File | Preferences | Workspace Settings, which opens (or creates if necessary) my project's .vscode\settings.json file. Then I can sprinkle in a little magic dust and tell Code that I'm not so concerned with .js and .js.map files and I'd just rather they not show up in my File Explorer pane or in my global search results.

Here’s the magic dust…

{
    "files.exclude": {
        "app/**/*.js": true,
        "app/**/*.js.map": true
    }
}

If, however, you don’t check your .js files into GitHub, then when you configure Azure to do continuous deployment from GitHub, it’s not going to pull in any .js files and that’s what your users’ browsers really need to make the site run.

So this is where some people say “Oh, blasted! I’ll just check my .js files in and call it done”.

True that works, but it also incurs technical debt. Don’t do it. It’s not worth it. Stick to your philosophical guns and don’t make choices like this. It may cost a little more up front to figure out the right way, but you’ll be glad later.

So, where and when should the .ts files get transpiled?

The answer is that they should get transpiled in Azure and it should happen each time there’s a deployment.

Now, let’s dig in and figure out how to do this.

If you do a little research, you’ll find that when you wire Azure up to look at GitHub, it does a pull of the code every time you push to the configured branch. Then it runs a default deployment script if you haven’t specified otherwise.

To run some code for each deployment, you simply customize this deployment script. You do that by adding two files to the root of your project: .deployment and deploy.cmd. You could just create these files manually, of course, but it's better to generate them. That way you have the latest recommended default script and it's specifically made for the type of application you're running.

To generate the default deployment script, you first need to have the Azure Xplat CLI tool installed, which is a breeze. Just do npm install -g azure-cli. If you already have it and haven’t updated it for a while, then run npm up -g azure-cli.

After you have the azure-cli tool, you need to login to your Azure subscription. This is a lot easier than it used to be.

Simply type azure login. That will generate a little code for you and then ask you to go to a website, login, and enter your code. From that point forward, you’re able to access your Azure goodies from your command line. CLI FTW!

Once you get that, just go to the root of your website project (at the command line) and then run…

azure site deploymentscript --node

This will create the .deployment and deploy.cmd files.

Okay, now we just have to customize the deploy.cmd file a bit.

If your deployment script looks like mine, then there’s a part that looks like this…

:: 3. Install npm packages
IF EXIST "%DEPLOYMENT_TARGET%\package.json" (
    pushd "%DEPLOYMENT_TARGET%"
    call :ExecuteCmd !NPM_CMD! install --production
    IF !ERRORLEVEL! NEQ 0 goto error
    popd
)

That script runs npm install to install your npm dependencies. It adds the --production flag to indicate that developer dependencies should be skipped since this is not a dev box - it’s the real deal!

Just after an npm install, you’re ready for the meat of the matter. It’s time to turn all of your .ts files into .js files.

To accomplish this, I added this just after step 3…

:: 4. Compile TypeScript
echo Transpiling TypeScript in %DEPLOYMENT_TARGET%...
call :ExecuteCmd node %DEPLOYMENT_TARGET%\node_modules\typescript\bin\tsc -p "%DEPLOYMENT_TARGET%"

The first line is obviously a comment.

The echo shows what’s going on in the console so you can find it in the log files and such.

The last line calls :ExecuteCmd (which is a function that comes with the default deployment script) and asks it to run TypeScript's command-line compiler (tsc) using node and pointing it to the deployment target. The deployment target is the /site/wwwroot directory that contains your site. The command explicitly uses the tsc command that's in the deployment target's node_modules\typescript\bin folder. That should be there because we have typescript defined as one of the project's dependencies in the package.json. Therefore the npm install from a few lines up should have installed typescript. Another strategy would be to install typescript globally, but I opted for this method.
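For reference, that just means package.json lists typescript under dependencies (not devDependencies, since we install with --production) - something along these lines, with whatever version you're actually targeting…

{
    "dependencies": {
        "typescript": "^1.8.10"
    }
}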

And that’s really all there is to it. I like to jump over to my SCM site (.scm.azurewebsites.net) and go to Debug Console | PowerShell to see the actual files on the site and make sure the .js files were generated.

If you look in the list of deployments in your Azure portal, you can actually double-click on the latest deployment and then click on View Log to see the console output that was captured when this deployment script ran…

In the log, you can see our echo and that the transpilation process has occurred. Don’t worry about the errors that are thrown. Those are expected and didn’t stop the process from completing.

On the New Mongo Capabilities in DocumentDB

On March 31, 2016 it was announced at //build and also by Stephen Baron via the DocumentDB blog that DocumentDB could now be used as the cloud data store for apps that already target MongoDB.

There’s a good video all about DocumentDB that came out of the recent //build event, and if you jump to 16:20 you’ll hear John Macintyre describe this new offering in good detail.

In this post, I’d like to break down what this means and why I think this is cool beans.

First of all, if you’re itching to get started, just check out how to join the preview program in the aforementioned blog post.

What does this mean in my own words? Keep in mind that my words tend not to contain a lot of technical speak. I have to keep things well organized in my mind if I'm to avoid insanity - an aspect of my personality that I'm hoping works to your advantage when I record my thoughts in video or in this case in HTML.

I’ll start with what this is not. This is not a driver or an adapter. It’s not a package that you install that translates everything you do against Mongo into underlying calls to DocumentDB’s API.

That would be pretty cool, and I’m not certain that it didn’t already exist, but this is not that. The team decided on an approach that was lower level, more performant, and more compatible. They decided to essentially build MongoDB wire-level protocol compatibility into DocumentDB.

This is more performant because it doesn’t rely on any sort of adapter. It’s more compatible because it doesn’t care what tools, libraries, or techniques you use to talk to MongoDB today. Whatever strategy you use will inevitably result in MongoDB protocol compatible messages on the wire, and that’s going to work with DocumentDB.
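To make that concrete, here's a sketch of what I mean - plain MongoDB driver code in Node with nothing DocumentDB-specific about it except the connection string (the account name, key, port, and database below are placeholders; the real connection string comes from the Azure portal once you're in the preview)…

var MongoClient = require("mongodb").MongoClient;

// placeholder connection string - substitute the one from the Azure portal
var url = "mongodb://myaccount:mykey@myaccount.documents.azure.com:10250/mydb?ssl=true";

MongoClient.connect(url, function (err, db) {
    if (err) throw err;
    db.collection("widgets").insertOne({ name: "test" }, function (err, result) {
        db.close();
    });
});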

I’d also like to attempt to position this against the open-source MongoDB code base that currently exists.

Is this Microsoft’s attempt to compete with Mongo? No way.

If anything, this is a recognition of the power and popularity of MongoDB.

DocumentDB’s support of this protocol doesn’t, in fact, do away with the need for MongoDB. DocumentDB is only a cloud service. You can’t install DocumentDB in a mobile app and run it offline. You can do that with MongoDB.

On the contrary, you use DocumentDB and this protocol when you already know MongoDB, but you want the many benefits of hosting your database in the cloud as a managed service - the primary advantages being scale and elasticity.

Take a look at this great article about the similarities and differences between MongoDB and DocumentDB.

This announcement appears to me to capture the strengths of these platforms without being forced to accept the shortcomings of either.

Make Git Wait for Code

There’s a decent chance that you, like me, ended up with Visual Studio Code incorrectly configured as Git’s core editor. I’m talking about Windows here.

Take a look at your .gitconfig file and see what you have configured. You will likely find that in c:\users\<username>.

Under the [core] section, look for the editor key. What do you have for a value?

If your Visual Studio Code path ends with code.cmd, then it’s not correct. It should end with code.exe. And it should have a -w flag. The -w flag tells the launching context to block until the process is terminated. That means that if you run a Git command from the command line that launches Code as a text editor, the command line should be blocked until you’re done editing that file and shut down Code.
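Concretely, the before and after in .gitconfig look roughly like this (your install path may differ)…

# before (wrong)
[core]
    editor = 'C:\Program Files (x86)\Microsoft VS Code\code.cmd'

# after (right)
[core]
    editor = 'C:\Program Files (x86)\Microsoft VS Code\code.exe' -w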

Let’s say, for instance, that you have committed some files and then realize that you forgot one. You could commit it as a new commit, but it makes more sense to tack the change on to the last commit (assuming you haven’t pushed your commit up to a shared repo yet!).

To do this, you simply run git commit --amend at the command line. This amends your staged files to the last commit. It also launches your default text editor so you can determine if you want to keep the same commit message you elected previously or overwrite it.
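For the forgot-a-file scenario, that's just this (the file name here is made up)…

git add forgotten-file.js
git commit --amend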

This should open your text editor, wait for you to make and save your changes and then shut down your editor before releasing control of the command line and continuing on.

You can simply edit your .gitconfig file to add this configuration, but it’s easier to run this…

git config --global core.editor "'C:\Program Files (x86)\Microsoft VS Code\code.exe' -w"

…from your command line.

Hope this helps you like it did me. Credit goes to F Boucheros on this Stackoverflow post.

Developer Reactions to Build 2016

Microsoft takes opportunity every year at //build - its annual conference for developers - to make as many shock and awe announcements as it can, and this year in 2016, there was plenty of shock and plenty of awe.

Maybe you’ve watched all the keynotes already. Maybe you’ve even watched all of the sessions already. We’re going to assume, however, that even if you have seen or otherwise caught wind of the announcements that you would like to get an answer to the question “What does that mean for me?”
In this post, I’m going to invite a number of colleagues - all Microsoft Technical Evangelists - to share in detail via blog posts and videos about their favorite announcements, and what they mean for you - the developer.

This is an active blog post that will be updated as new content lands, so check back often.

We’ll start with a Channel 9 introduction to a few of the team. In this video you’ll meet…

Now, as promised, here’s the line-up of content from the evangelists you saw in the video and a few more. Topics will be filled in as we go and links will light up when they’re active.

James Sturtevant @aspenwilder

APRIL 11: My reaction to the news that Bash is on Windows, the .NET Foundation gaining new members, and what Service Fabric going GA means to developers.

read more

Adam Tuliper @adamtuliper

APRIL 13: Excited to get started developing for the HoloLens – even if you don’t own one yet? Join Adam for a tour of what the HoloLens can do, how to get started with the Unity bits for the HoloLens, and explore some of the powerful APIs to work with the HoloLens!

read more

Shahed Chowdhuri @shahedc

APRIL 15: Do you dream about publishing your own games on a major game console? Get caught up with the latest Xbox news from Build 2016 and hear about the different ways you can publish your very own game on Windows 10 and Xbox One. Use your own Xbox One console for development or apply for a dev kit via ID@Xbox. Harness the power of DirectX 12 and use a variety of tools to build your own games!

read more

Tim Reilly @timmyreilly

APRIL 18: Interested in what a Partner Evangelist pays attention to during build? Sertac Ozercan works with partners to bring their apps to Windows and shares his notes about changes to the store, chase-able tiles, and more.

read more

Sam Stokes @socalsam

APRIL 20: //Build brought new, awesome stuff for Power BI. Power BI is powerful as is, so just what are the designers changing? In this video I will cover the super cool things that have changed in Power BI to make it an even more powerful tool than it already is. Is BI really open source? How about a no-code app for Apple devices or Android? What if you need everyone who is using your Power BI dashboards? Embedded Power BI isn't what you think it is. Watch this video and catch the excitement of Power BI!

read more

Jennifer Marsman @jennifermarsman

APRIL 22: Jennifer Marsman fills you in on the machine learning announcements from Build 2016. We announced the Microsoft Bot Framework and showcased the Microsoft Cognitive Services (formerly Project Oxford) for adding intelligence to your applications. We’ll discuss the fun Project Murphy bot and the inspiring Seeing AI story.

read more

Brian Sherwin @bsherwin

APRIL 25: Coverage of IoT and Office 365 announcements and resources to follow up on.

read more

Nick Landry @activenick

APRIL 26: We are moving from a world of data and apps, to a new exciting world of conversations with personal digital assistants and bots using speech and natural language. Nick Landry provides an introduction to the latest advances in Cortana integration on Windows 10, as well as the brand new Bot Framework, opening up a new realm of possibilities in human-computer interactions.

read more

Jerry Nixon @jerrynixon

APRIL 27: Build 2016 was like Christmas for UWP developers creating Windows apps. As existing features were enriched, several new innovations were unveiled to make developers more productive and apps more valuable with signature Windows experiences and capabilities. In this article, we’ll walk through the Windows announcements – every single one of them – from mapping to proximity, XAML enhancements, the Action center, and implications for cross-platform development.

read more

Sam Stokes @socalsam

APRIL 29: Skype will blow your mind if you just think Skype is only for instant messaging or voice mail. Medical telepresence may save the Affordable Care Act by making medicine more efficient. You as a developer can actually save lives by getting access to HIPAA compliance directly! What about Project Management. If you are developing Project Management tools, this is for you! In this video we will take a look at the excitement of Skype, Skype Bots and how you can generate wealth for you and society. Of everything at //Build 2016, Skype may be the quiet way to success for you!

read more

Building Things Using Fusion 360 and JavaScript

I like making things.

I used to mostly just make things that show up on the computer screen - software things. Lately, however, I’ve been re-inspired to make real things. Things out of wood and things out of plastic and metal and fabric and string.

The way I see it, we design things either manually or generatively.

By manual I mean that I conceive an idea then design and build it step by step. I - the human - am involved every step of the process. Imperative code is manual. Here’s some pseudocode to describe what I’m talking about…

// step 1
// step 2
// if step 2 value is good then step 3
// else step 4 10 times

See what I mean?

I’m not arguing that this sort of code and likewise this sort of technique for building is not essential. It is. I am, however, going to propose that it’s often not altogether exciting or inspiring. The reason, IMO, is that the entire process is no greater than the individual or organization that implements it. An individual only has so many hours in the day and is even limited in ideas. An organization can grow rather large and put far more time and effort into a problem and obviously generate more extensive results. But the results are always linearly related to the effort input - not so exciting.

By generative I mean that instead of creating a thing, I create rules to make a thing. The rules may be non-deterministic and the results completely unexpected - even from one run to another. The results often end up looking very much like what we find in nature - the fractal patterns in leaves, the propagation of waves on the water, or the absolute beauty of ice crystals up close.

What’s exciting is when an individual or organization puts their time and effort into defining rules instead of defining steps. That is, after all, the way our own brains work, and in fact, that’s the way the rest of nature works too. It’s amazing and awesome and I would venture to say it’s even miraculous.

I think a lot of my ideas on the matter parallel and perhaps stem from Stephen Wolfram’s book A New Kind of Science.

Most of the book is about cellular automata. The simple way to understand these guys is to think back to Conway's Game of Life. The game is basically a grid of cells that each have a finite number of states - oftentimes two states: black and white. Initially, the cells in the grid are seeded with a value and then iterations are put into place that may change the state of the cells according to some rules.

The result is way more interesting than the explanation. The cell grid appears to come to life. The fascinating part is that the behavior of the system is usually not what the author intended - it’s something emergent. The creator is responsible for a) creating an initial state and b) creating some rules. The system handles the rest. It usually takes a lot of trial and error if the intention is to create something that serves some certain purpose.
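To give a sense of just how small the rules are, here's a minimal JavaScript sketch of a single Game of Life generation - everything interesting the grid ends up doing falls out of these few lines…

// one generation of Conway's Game of Life on a 2D grid of 0s and 1s
function step(grid) {
    var rows = grid.length, cols = grid[0].length;
    var next = grid.map(function (row) { return row.slice(); });
    for (var r = 0; r < rows; r++) {
        for (var c = 0; c < cols; c++) {
            var neighbors = 0;
            for (var dr = -1; dr <= 1; dr++) {
                for (var dc = -1; dc <= 1; dc++) {
                    if (dr === 0 && dc === 0) { continue; }
                    var rr = r + dr, cc = c + dc;
                    if (rr >= 0 && rr < rows && cc >= 0 && cc < cols) {
                        neighbors += grid[rr][cc];
                    }
                }
            }
            // the rules: a live cell survives with 2 or 3 neighbors;
            // a dead cell comes to life with exactly 3
            next[r][c] = grid[r][c] === 1
                ? ((neighbors === 2 || neighbors === 3) ? 1 : 0)
                : (neighbors === 3 ? 1 : 0);
        }
    }
    return next;
}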

Check out Wikipedia’s page on cellular automata, and specifically look at Gosper’s Glider Gun.

I don’t know about you, but I find that completely awesome.

Okay, so when are you going to get to the point of the blog post, codefoster?

Calm down. It’s called build up. :)

First, let me say that generating graphics in either 2D or 3D is nothing conceptually new. What I like about discovering and learning an API for CAD software, though, is that I can not only generate something that targets the screen, I can generate something that targets the 3D printer or the laser cutter. That’s all sorts of awesome!

The example I’m going to show you now is a simple one that I hope will just get your gears turning. You could, by the way, take that literally and generate some gears and get them turning.

If you don’t have Fusion 360, go to fusion360.autodesk.com and download it. If you’re a hobbyist, maker, student, startup type you can get it for free.

If you’re new to the program, let me suggest the learning material on their website. It’s great.

After you install Fusion 360, the first thing you need to do is launch the program. This API is attended. It requires that you open the program and launch the scripts. I have suggested to the team at Autodesk to research and consider implementing unattended scenarios as well.

Now launch the Scripts and Add-ins… option from the File menu…

Don’t be confused by the Add-Ins (Legacy) option in the same File menu. That’s for an old system that you don’t want to use anymore.

That should launch the Scripts and Add-Ins dialog…

There are two tabs - Scripts and Add-Ins. They’re the same thing except that Add-Ins can be run automatically when Fusion 360 starts and can provide commands that the user can see in their UI and invoke by hitting buttons. Add-Ins ask you to implement an interface of methods that get called at certain times. If you simply click the Create button on the Add-Ins page, it will make you a sample with most of that worked out for you already.

Let’s focus on the Scripts tab for now.

You’ll see a number of sample scripts in there. Some of them will have the JavaScript icon… …and others will have the Python icon…

The Fusion 360 API supports 3 languages: C++, Python, and JavaScript.

Above those, you’ll see the My Scripts area that contains any scripts you have written or imported.

It’s not entirely clear at first how this works. Let me explain. If you click Create at the bottom, you’ll get a new script written in a strange folder location. It’s good because it gives you the right files (a .js file, an .html file, and a .manifest), but it’s bad because it’s in such an awkward location. The best thing to do in my opinion is to hit create and get the sample code files and then move the files and their containing folder to wherever you keep your code. Then you can hit the little green plus and add code from wherever you want.

One more nuance of this dialog is that if you click the Edit button, Fusion 360 will launch an IDE of its choice. I think this is weird and should be configurable. If I edit a JavaScript file it launches Brackets. I don’t use Brackets. I use Visual Studio Code. It doesn’t end up being that much trouble, but it’s weird.

To edit my code, I just go to my command line to whatever directory I decided to put it in and I type…

code .

That launches Code with this directory as the root. Here’s what I see…

There you can see the .html, .js, and .manifest files.

I’m not going to take the screen real estate to walk you entirely through the code. You can see it all on GitHub. But I’ll attempt to show you what it’s doing at a high level.

Here’s the code…

The full script is posted as a gist at https://gist.github.com/codefoster/0b24212710319b681453.

Let’s break that down some.

The createNewComponent function is just something I made. That's not a special function the API is expecting or anything. The run function is, however, a special function. That's the entry point.
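If you've never seen one, the skeleton of a Fusion 360 JavaScript script looks roughly like this (paraphrased from memory and from the generated samples - the boilerplate the Create button gives you is the authority here)…

// run is the entry point Fusion 360 calls when you hit Run in the Scripts dialog
function run(context) {
    "use strict";
    var ui;
    try {
        var app = adsk.core.Application.get();
        ui = app.userInterface;
        ui.messageBox('Hello from a script');
    } catch (e) {
        if (ui) {
            ui.messageBox('Failed: ' + (e.description ? e.description : e));
        }
    }
    adsk.terminate();   // let Fusion know the script is finished
}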

Essentially, I’m creating a 20x20 grid, prompting the user to select a body, and then doing a 2D loop to copy the selected body. The position is all done using a transformation that shifts each body into place and then offsets it a certain amount in the Z direction. In this case, I’m just using a random number, but I could very well be feeding data in to this and doing something with more meaning.

Watch this short video as I create a cube and then invoke this script on it…

So, here is where you just have to sit back and stare at the ceiling and think about what’s possible - about all the things you could generate with code.

My example was a basic, linear iterator. Perhaps, however, you want to create something more organic - more generative?

Check out this example by Autodesk’s own Mike Aubry (@Michael_Aubry) where he uses Python code to persuade Fusion 360 to build a spiral using the API.

That has a bit more polish than my gray cubes!

If you build something, make sure you toss a picture my way on Twitter or something. I’d love to see it.

Extensible Code

Visual Studio Code has extensions!

The bells ring, the confetti flies, the fans go wild!

The two things we all wanted from Code were…

  1. to see it go open source
  2. to get extensions.

If you were following the user voice page for Code like I was, you’d have seen way more votes for extensions than for any other feature. The size of the vote count made it look like not having extensions was a total deal breaker and for many folks I talked to… it was.

Well, now it’s here!

It’s here in full force. Not only are extensions available, but there are already a whole lotta cool extensions available in the online gallery. There were about 60 a couple days before launch - a metric that jumped over 20 points by the time Sean McBreen was showing off (here and here) the announcement at the Connect() conference. And there are obviously a lot more now just a few weeks later.

Getting extensions is like getting three wishes from a genie in a bottle and for your first wish requesting unlimited wishes. Code is a great tool, but with extensions, you can make it do most anything you want.

Some of the great things about Code extensions are…

  • they’re easy to write. To run and test an extensions, Code launches an instance of itself. It’s a bit like Inception that way. Then you can just play with your extension as it currently is and be sure it’s behaving as you designed.

  • they run in a different process. When you start up Code, it's okay if you have 38,329,420 extensions installed, because they're not loading synchronously in the same process as your main editor. Granted 38M+ extensions is going to bog something down and I think you'll have a hard time finding that many unique extensions in the marketplace any time soon, but my point is that you don't have to worry about the performance impact of installing your favorite few.

  • they can be written in either raw JavaScript or in TypeScript, and generating them is quick and easy using Yeoman (which by the way is awesome!).

  • publishing them is just about the easiest thing in the whole world. It's literally one command - nay, one short command… vsce publish
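To put "quick and easy" in concrete terms, the whole loop from scaffold to publish is roughly these commands (assuming you already have Node.js installed; the generator and the publishing tool are both npm packages)…

npm install -g yo generator-code    # Yeoman plus the Code extension generator
yo code                             # scaffold a new extension
npm install -g vsce                 # the publishing tool
vsce publish                        # push it to the gallery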

My Favorite Extensions So Far

I haven’t found the time to install every extension (who has that kind of time?!), but here are three of my favorites so far…

MDTools

Markdown (.md) files are really handy. If you’re not familiar with markdown files, just think of them as a cross between text files and HTML files. Text files are nice because they are very readable. Markdown files are readable still but they give us the ability to easily bring in rich content like hyperlinks, images, and formatting. One of the great additions to markdown is the ability to indicate spans or blocks of code and even in some cases to specify the code language and get great formatting.

So it’s no surprise that markdown files have become the standard for creating documentation and meta text for code repositories. Developers work with markdown a lot and it’s exciting to have a bit of help.

The MDTools extension allows you to do a lot of those little things to a selected block of text. You can convert to upper case, lower case, or title case, you can HTML encode or decode, and you can even convert to ASCII art - an extremely fun use case! To activate these tools, install the extension, restart the editor, select some text, and then use the ALT + T keyboard shortcut.

There’s a lot more the MDTools extension can do to, so check it out.

Quick Snippet

Quick Snippet is a great idea for an extension by my colleague Sara Itani (@mousetraps). See my interview of Sara on episode 048 of my podcast CodeChat. It allows a developer to highlight a block of text they’ve written and quickly and easily create a snippet out of it. In my experience, it takes a little bit of discipline to create snippets today to save time tomorrow. This extension excites me because it removes some of the friction and makes snippet creation fast. Now I can save time on saving time!

Twitter

This one’s just cute and fun and shows off the power and versatility of extensions in Code. The Twitter extension let’s you read and even write

Create Your Own

Now here’s the real winning tip. You don’t have to just check every week to see if someone has created your new doodad yet. You can just build it yourself!

If you’re wondering if it’s hard, it’s not. If I can make an extension, you can.

Watch this. I’m going to build the hello world extension from start to finish in just over a minute. Granted I sped it up a little and skimmed over the long running npm install bits, but still. You can see that it’s an easy process. Note: this assumes you have Node.js and Visual Studio Code installed already.

If that went just a little bit too fast for you, you can get the complete tutorial by going to code.visualstudio.com/docs/extensions/example-hello-world, and for a bunch more information about getting started creating Code extensions, go to code.visualstudio.com/docs/extensions/overview.

My team has put together a bunch of different videos and blog posts to sum up the announcements from Connect(). You can see the rest of them by visiting Jerry Nixon‘s post Inside the Code: What’s New with Visual Studio.

VS Code Goes Open

Visual Studio Code is now open source.

Me: What do you think of Visual Studio Code?
Some Dude: It’s awesome. I just wish it were open source.
Me: You need to fork it? Tweak it?
Some Dude: No.
Me: Okay.

I get it. I like open source stuff too.

Realistically, there are few products I have time to fork and fewer still that I have need to fork.

But even when I have no need to fork a project and no intention to submit a pull request any time soon, still I want it to be open source. Why? Because… freedom.

I like closed source products too, actually. Closed source products can be sold. Selling products earns a company money. Companies with money can create big research and development departments that can tinker with stuff and make new, cool stuff. And ultimately, I like new cool stuff.

The best scenario for me, a consumer, though, is when a big company with a big research and development department can afford to make something cool and free and open, because they make money on other products.

Some products (think Adobe Photoshop) are obviously a massive mess of proprietary code that feels right belonging to its parent company. They need the first-party control.

Others, like Code, feel more like they belong to the community. That's how I feel anyway.

And now I can. Visual Studio Code is officially OSS!

In case you missed it, Microsoft announced at Connect() 2015 that Code was graduating from preview to beta status and that it would be open sourced.

To see Code’s code comfortably settled into its new home, just head over to github.com/microsoft/vscode. From there, you can clone it, fork it, submit an issue, submit a PR… or look at what the team is working on and who else is involved. You know… you can do all of the GitHub stuff with it.

So there it is. It’s not only free as in “free beer” now, but also as in “free speech”.

The actual announcement is buried in the keynote, so the best way to get the skinny on this announcement, the details, and the implications is to watch the Visual Studio Code session hosted on Connect() Day 2 by @chrisrisner. The panel shows off Code in serious depth. It’s a must-see session if you’re into this stuff.

One of the more exciting things they showed off is actually the second gigantic announcement regarding Code… the addition of extensions to the product, but that’s a big topic for another day and another blog post.

What exactly does the open sourcing of Code mean for you? As I mentioned, you may or may not be interested in ever even viewing the source code for Code. The real gold in this announcement is the fact that Code now belongs to the community. It’s ours. It’s something that we’re all working on together. That’s no trivial matter. Microsoft may have kicked it off and may be a huge contributor to it here forward, but so are you and I.

So whether you’re going to modify the code base, study the code base, or just take advantage of the warm feeling that open source software gives us, you know now that the best light-weight code editor for Windows, Linux, and Mac, is ready for you.

Let’s have a quick look at the code for Code using Code. ​The official repo is at http://github.com/Microsoft/vscode. So start by cloning that into your local projects folder. My local projects folder is c:\code, so I do this…

Then, you launch that project in Code using…
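cd vscode
code .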

You’ve got it now. So I just added “codefoster” to a readme.md file to simulate a change and then hit CTRL + SHIFT + G to switch to the Git source control section of VS Code, and here’s what I see…

Notice that the changed file is listed on the left and when highlighted the lines that were changed are compared in split panes on the right. Checking this change in would simply involve typing the commit message (above the file list) and then hitting the checkmark.

This interface abstracts away some of the git concepts that tend to intimidate newcomers - things like pushing, pulling, and fetching - with a simpler concept of synchronizing which is accomplished via the circle arrow icon.

It’s important to note that I wouldn’t be able to check this change in here because I don’t have direct access to the VS Code repo. Neither do you most likely. The git workflow for submitting changes to a repo that you don’t have direct access to is called a pull request. I’ll leave the expansion of this topic to other articles online, but in short it’s done by forking the repo, cloning your fork, changing your files, committing and pushing to your fork, and then using github.com to submit a pull request. This is you saying to the original repo owner, “Hey, I made some changes that I think benefit this project. They are in my online repository which I forked from yours. I hereby request you _pull _these changes into the main repository.

It’s quite an easy process for the repo owner and I don’t think a repo owner on earth is opposed to people doing work for them by submitting PR’s. :)

Again, getting involved simply means interacting and collaborating on GitHub. Here’s how…

  • Check out the list of issues (there are already over 200 of them as I type this) on microsoft/vscode repo.
  • Chime in on the issues by submitting comments.
  • Create your own issue. See how.
  • Clone the code base using your favorite git tooling or using git clone https://github.com/microsoft/vscode.git on your command line. That will allow you to git pull anytime you need to get the latest. Having the code means you can browse it whenever you’re wondering how something works. See how.
  • Fork the code using GitHub if you want to create a copy of the code base in your own GitHub repo. Then you can modify that code base and submit it via a pull request whenever you’re certain you’ve added some value to the project. See how.

And you can chatter about Code as well on Twitter using @Code. As to how they got such an awesome handle on Twitter I have no idea.

Also check out my mini-series I’m calling Tidbits of Code and Node on the Raw Tech blog on Channel 9 where I’ve been talking a lot about Code (and Node) and plan to do even more now that the dial for its awesome factor was turned up a couple of notches.

Happy coding in Code!