Highlighting the Beauty of Rx

Some time ago, a small team and I dedicated one evening a week to working on an app.

After the formulation of a ton of good ideas and some real progress on the project, we came to the unfortunate realization that we just didn’t have the after-hours bandwidth the project required.

I still wish we did, though, because it was a good idea, and the idea is often the hardest part of any project.

I don’t want to dive into the details of the project, but I do want to share the pattern we were pursuing - the observable pattern.

The first time I saw Reactive Extensions (Rx) I had a jaw-dropping experience. Its elegance was apparent despite its implementation being a bit complex - it's one kind of complex at first and continues to be another kind of complex the more you use it. Since then I've been looking for excuses to use this pattern and this library. I've found a few, and our app was one of them.

The app I’m alluding to is a game, and it handles a bunch of game data that happens to represent real life players with a mobile device and a GPS, but it could just as well represent 2D or 3D sprites or something besides a game at all.

You don't need the low-level context, but I do need you to understand what was going on in the app, and that shouldn't be too difficult.

Imagine every possible event that might occur in a game - everything. A player might move - even a small distance. A player might join… or quit… or shoot… or whatever. These are considered GameEvents.

Now imagine all of these events in one giant stream. That's right: one flat structure. Sort of like a Redux store or a transaction log.

Now imagine all of these events funneling through a single observable inside the game service (the service all players are sending their game events to).

And that should give you enough context to understand what I’ll share next - an observable-based engine for processing game rules.

Now before I embark, know that one of the biggest advantages here is that this general pattern gives us the flexibility to define whatever sorts of rules we want. So one set of rules would implement one game, and another set of rules would implement something altogether different.

Let’s say we want to write a rule that is only interested in when a player has physically moved (as it turns out, that’s one of the most interesting events in the game). In the Rx world, that would look something like…

var playerMoves$ = game.Events
    .Where(ev => ev.Type == GameEventType.PlayerLocation);

Note that I’m writing C# code here because that’s what we started with, but this should look pretty similar to some other popular languages you might be using.

What that code says is that I want to declare a new observable (playerMoves$) that is a filtered set of the entire set of game events - only the ones of type PlayerLocation.

Since the player location changes are such an important event, it’s good to set that one up to feed the others. Now let’s get on to another…

//any player collides with any other player
var playerCollisions$ = playerMoves$
    .Select(pl => new {
        PlayerLocation = pl,
        CollidingPlayers = pl.Game.Players
            .Where(other => other != pl.Player && other.Location.Distance(pl.Location) < 5)
    })
    .Where(c => c.CollidingPlayers.Any());

This rule depends on the playerMoves$ we declared and set in the previous block and extends it.

This one projects each player that just moved into a new anonymous object that includes any other players that are very close to them (in this game, proximity determines a “collision”).

Then we chain the .Where function on there to say that we’re only interested in occurrences where there was a collision (that’s the .Any part).

If you don’t understand that code, spend some time with it. Print it and take it to dinner with you. Put it on your nightstand. This is the sort of code block that looks bizarre first and elegant eventually.

Okay, now I’m only going to take you one step further, and I’m going to do so because although I’ve been calling these “rules,” you haven’t seen a real rule yet.

These were conveniences. These were the application of a couple of Rx operators that essentially gave us some alternate views into that massive stream of game events.

The playerMoves$ gave us a subset and the playerCollisions$ gave us another subset. To create a real rule, we need to take some action. Watch this…

playerCollisions$
    .Select(c => new {
        c.PlayerLocation,
        CollidingPlayers = c.CollidingPlayers
            .Where(cp => cp.Team() != c.PlayerLocation.Player.Team()) //make sure it's a collision with an _opponent_
            .Where(cp => c.PlayerLocation.Location.Intersects(cp.Team().Zones.Single(z => z.Name.StartsWith("Zone")).Definition)) //in opponent's territory
    })
    .Subscribe(c => {
        //send the player to jail
        c.PlayerLocation.Player.NavigationTarget =
            c.CollidingPlayers.First().Team().Waypoints.Single(w => w.Name == "Jail");
    });

So this block starts with that convenience observable - playerCollisions$.

Then it projects it to an anonymous object that includes the player(s) that are in collision. In that filter, the colliding players are filtered to only the players that are a) on the other team and b) in the other player’s area (zone). This rule actually comes from Capture the Flag, in case you didn’t recognize it, and occurs when a player gets tagged while running in another player’s territory.

And then comes what might be considered the interesting part - if I weren’t such a geek who finds all of this stuff interesting :)

The .Subscribe method. This method determines what happens when this sort of collision occurs. In the case of Capture the Flag, the player is to be sent to jail - the other player’s jail that is. Thus…

c.PlayerLocation.Player.NavigationTarget =
    c.CollidingPlayers.First().Team().Waypoints.Single(w => w.Name == "Jail");

That is… set the player’s (the one that got tagged) navigation target (where the app tells the player to go) to the other team’s waypoint labeled “Jail”.

And that’s as far as I’ll go.

Remember, the purpose here is to help you understand why you might choose to use the observable pattern in your application and to show you how terse and elegant it can make your code.

Happy hacking!

Edge Device Discovery - an Unfinished Project

The Team

  • Masha Reutovski - Project Manager
  • Bret Stateham - BLE Communicator
  • Gandhali Samant - BLE Scanner
  • Kristin Ottofy - Sync Engine
  • Joe Raio - API
  • Jeremy Foster - UI

We were a diverse group of engineers and one project manager from Microsoft’s Commercial Software Engineering (CSE) group. This project was an initiative that Bret Stateham submitted for Sync Week hacks.

Project Overview

This IoT Edge Device Discovery project is built on the Azure IoT Edge service. First, we’ll discuss Edge and then this project’s added value.

Azure IoT Edge

IoT Edge is a service that comes as part of Azure’s IoT offering. It is intended to run on field gateway devices (“edge” devices) and facilitate the aggregation of data from other devices in an on-site IoT solution - devices that may not have the ability to communicate directly with the cloud or for whatever other reason should send their data through a gateway.

Azure’s IoT Edge service is undergoing a big transformation from version 1 to version 2. Version 1 is already in the wild. Version 2 offers some dramatic benefits such as containerized modules that can be run on the edge or in the cloud, but this version is still in private preview and undergoing breaking changes.

In this project, we opted to focus on IoT Edge v1. We are fairly confident that any value added would not be difficult to port to version 2 in case the opportunity arises. We also recognize that IoT Edge v2 may include some functionality that partially or perhaps even entirely overlaps with this project.

IoT Edge v1 offers multiple development paths, including native development in C++, NuGet packages to bootstrap .NET development, Maven packages to get started with Java, and npm packages for Node.js developers. We chose the Node.js development path based on initial research around the noble npm package for accessing Bluetooth Low Energy (BLE) devices in Node.js.

IoT Edge v1 can be run on a variety of devices and operating systems. For this project, we opted to use the Raspberry Pi 3 running Raspbian Jessie as the gateway device because it was known to be compatible with IoT Edge v1 and had an integrated Bluetooth hardware stack that was known to be compatible with the noble npm package.

Finally, BLE is a popular standard and there are countless devices that could be discovered and communicated with. For this project, we focused on the TI Sensor Tag CC2541 and CC2650 as our reference devices. These tags contain a number of sensors we could leverage and provided a good model for other BLE devices.

IoT Edge Device Discovery

In IoT Edge as it exists today, if a solution administrator needs to pull a new device into the network to start recording and sending data to the cloud, the process is a bit difficult. The devices that might be added could be speaking various protocols, but for this project we focused on BLE devices.

The current process for bringing new BLE devices into a solution to start getting new data looks something like this…

  • new BLE device is brought into the proximity of the solution
  • admin manually retrieves the device’s MAC address and characteristics array
  • admin adds the MAC address and characteristics to the IoT Edge configuration file
  • admin restarts the edge service

This solution would provide a means for these devices to be discovered automatically and simply approved by solution administrators. The process would look more like this…

  • new BLE device enters the premises
  • Edge service sees the device (including its MAC address and entire characteristics array) and submits it to a cloud service for storage and approval (Edge does not yet begin receiving communication from the device or acting on its reported data)
  • admin is notified and directed to a web portal to approve the device and configure the system’s behavior for using the device’s data
  • admin either clicks approve or deny for the device
  • upon approval, the Edge service begins acting upon data reported from the new device

This system would obviously be extended to support other network protocols besides BLE.

Architecture

In its current state, the solution consists of the following components…

  • BLE Scanner: the BLE Scanner module is specific to the BLE protocol and would be duplicated for other network protocols. The scanner is just another Edge module and constantly scans for BLE devices in the proximity of the gateway’s BLE radio. Upon seeing a device, the scanner reports the device and its characteristics array (the data points the device is capable of communicating) to the Sync Engine (also an Edge module) using the IoT Edge Message Broker. The scanner is not concerned with whether devices have been discovered and reported in the past or whether they’ve already been approved or denied. It simply reports what it discovers.

  • Sync Engine: the Sync Engine is also an Edge module and contains the majority of the business logic for this project. It receives information from the BLE Scanner module about what devices have been discovered nearby, their MAC address, and their characteristics array, and it keeps information about these devices synchronized with the data service in the cloud (via the API). It likely receives duplicate devices from the device scanners, but maintains last known state both locally and in the cloud.

  • BLE Communicator: The BLE Communicator is specific to the BLE protocol and would be duplicated for other network protocols. The communicator is also an Edge module and is responsible for communicating with the entire array of approved BLE devices. This is in contrast to IoT Edge’s default, native BLE module that is delivered with the product, which is only capable of speaking with a single BLE device. The BLE Communicator module maintains configuration on disk as well as in memory and relies on the Sync Engine module to update its configuration and let it know which devices (and which characteristics) it should be communicating with.

  • API: the API runs serverlessly as Azure Functions and provides endpoints for the Sync Engine and UI. The API allows the Sync Engine module to submit newly discovered devices (and their characteristic arrays) or update existing ones. The API then provides this information to the UI. The API is designed as a REST-compliant interface and thus relies on HTTP GET, POST, PUT, and DELETE operations against entity endpoints - the primary endpoint being the list of devices, which may be more clearly understood as device approvals.

  • UI: the UI is the only interaction point for solution administrators and allows the admin to determine which discovered devices should be considered by the Edge service, which of those devices’ characteristics should be read, which should be written, and on what schedule (i.e. once, periodically, etc.). The UI obviously relies on the API to ultimately take effect in the Edge service.

Components

The Scanner

Principal Developer: Gandhali Samant

Overview

The role of the BLE Scanner module, as mentioned above, is to discover BLE devices in range of the IoT Edge v1 gateway device. The module was written using Node.js and leverages the noble (https://github.com/sandeepmistry/noble) npm package. Noble supports both Windows and Linux and is the most popular Node.js package for BLE communication. This module is intended to constantly scan for new BLE devices and their characteristics. When a new device is discovered, the module generates a new message containing the device’s MAC address and GATT characteristics and publishes the message to the IoT Edge v1 Message Broker for consumption by other modules.
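
To make that concrete, here’s a trimmed-down sketch of the scanning logic - not the project’s actual module. The noble event API is real; the create/destroy module shape follows the IoT Edge v1 Node.js samples, and the message format and property names are my assumptions.

// scanner-sketch.js - illustrative only
'use strict';

const noble = require('noble');

module.exports = {
    create: function (broker, configuration) {
        noble.on('stateChange', (state) => {
            // begin a continuous scan once the BLE radio is powered on
            if (state === 'poweredOn') noble.startScanning([], true);
        });

        noble.on('discover', (peripheral) => {
            // filter to TI Sensor Tags to limit the number of devices published
            if (!peripheral.address.startsWith('54:6c:0e')) return;

            const payload = {
                macAddress: peripheral.address,
                localName: peripheral.advertisement.localName
                // enumerating the full GATT characteristics array requires connecting to the device
            };

            // publish the discovery to the Message Broker for the Sync Engine
            broker.publish({
                properties: { source: 'ble-scanner' },
                content: new Uint8Array(Buffer.from(JSON.stringify(payload)))
            });
        });

        return true;
    },

    destroy: function () {
        noble.stopScanning();
    }
};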

Challenges

  1. The IoT Edge v1 implementation doesn’t support the use of native Node.js modules. The noble npm package is a native npm package (meaning it has to be compiled for the platform), and we were unable to create an IoT Edge module that loaded it. The solution was to use the proxy, or remote, module pattern as discussed here: https://github.com/Azure/iot-edge/blob/master/samples/proxy_sample/README.md . However, that presented its own challenge, as discovered in #2.

  2. The Node.js implementation of the out-of-process proxy module is buried in a subfolder of the IoT Edge v1 GitHub repository and can’t be referenced directly from Node.js. We attempted to extract that folder only and create a locally linked npm package to depend on, but ultimately ended up having to move that code into our own repo (https://github.com/bretstateham/azipg) so we could create a dependency on it from our IoT Edge v1 module.

  3. The noble BLE implementation was great in that it was able to discover BLE devices, but it turns out there were hundreds of BLE devices available. We added a MAC address filter to discover and report only on BLE devices with MAC addresses that started with “54:6c:0e” (the prefix used by Texas Instruments CC2650 Sensor Tags) to limit the number of devices we published.

Successes

Once the challenges above were overcome, the module was able to successfully scan and discover the two TI CC2650 Sensor tag devices we had on hand. Once discovered, the details of a BLE device were collected, placed in a JSON payload, and published via the IoT Edge v1 Message Broker.

Future Development

The module will currently continue to publish the MAC address of a BLE device even if it has been previously discovered and approved or rejected. It would be ideal for it to be able to use a local data store to identify only new BLE devices that need to be reported.

The Sync Engine

Principal Developer: Kristin Ottofy

Overview

The Sync Engine IoT Edge module waits to receive a message from the Scanner module that a new BLE device has been discovered. It then checks a local file to determine whether the device has been approved or not. If the device is not listed in the file, the Sync Engine calls the get-approval API to alert the user of a new approval request on the UI and adds the device information to the local file. The Sync Engine asynchronously and routinely calls the get-devices API to check if the UI has updated the database. If it has, the Sync Engine reflects those changes in the local file to retain state on the gateway device and publishes a message on the IoT Edge broker for the BLE Communicator module to begin communication with the newly approved device. This module was written in Node.js and developed using Raspbian Jessie on a Raspberry Pi 2.
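
Here’s a rough sketch of that flow. Only the get-approval and get-devices endpoint names come from the design above; the devices.json path, payload shapes, and the callApi/publishToBroker helpers are all hypothetical.

// sync-engine-sketch.js - illustrative only
'use strict';

const fs = require('fs');

const STATE_FILE = './devices.json'; // hypothetical local state file

function loadState() {
    try {
        return JSON.parse(fs.readFileSync(STATE_FILE, 'utf8'));
    } catch (e) {
        return {}; // no local state yet
    }
}

// called when the Scanner module reports a device over the broker
function onDeviceReported(device, callApi) {
    const state = loadState();
    if (!state[device.macAddress]) {
        // unknown device: remember it locally and request approval via the API
        state[device.macAddress] = { status: 'pending' };
        fs.writeFileSync(STATE_FILE, JSON.stringify(state, null, 2));
        callApi('get-approval', device);
    }
}

// routinely poll the API to pick up approvals made in the UI
function startPolling(callApi, publishToBroker, intervalMs) {
    setInterval(() => {
        callApi('get-devices', null, (devices) => {
            const state = loadState();
            devices
                .filter(d => d.approved && (state[d.macAddress] || {}).status !== 'approved')
                .forEach(d => {
                    state[d.macAddress] = { status: 'approved' };
                    publishToBroker(d); // tell the BLE Communicator to engage
                });
            fs.writeFileSync(STATE_FILE, JSON.stringify(state, null, 2));
        });
    }, intervalMs);
}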

Challenges

Many of the challenges with this module were presented during the architecture phase. Retaining state across device power cycles or updates proved to be one challenge. The decision to use a local JSON file to store important information allowed us to get up and running quickly during the hackathon.

Successes

As this portion of the project continues development, we have so far succeeded in communicating across the gateway message broker, storing information in the local file, making the necessary API calls, and posting messages to the broker through various npm packages.

Future Development

There are opportunities available within the gateway device that could support the Sync Engine module through IoT Edge v2. Having a localized database would eliminate the need for the local file and allow for quicker checking of approved devices.

The BLE Communicator Module

Principal Developer: Bret Stateham

Overview

The BLE Communicator’s role is to implement the actual communication with the approved BLE devices. A single instance of the module is used to communicate with ALL of the configured BLE devices, as opposed to a single module instance per device. In addition to multiple devices, the module needed to support multiple communication patterns with the GATT characteristics on any given BLE device. The actual GATT characteristics and their usage patterns are supplied to the BLE module via the IoT Edge v1 configuration mechanism (a sketch of such a configuration follows the list):

  1. Read Once at Init: A characteristic that is read once at the beginning of communication with the device. The GATT Characteristic value would be read, and included in a message sent to the IoT Edge v1 Message Broker. Read Once values typically include device metadata like Manufacturer, Firmware version, Serial Number, etc.

  2. Write Once at Init: A characteristic that would be written to once at the beginning of communication with the device. The value to be written would come from the module configuration. This is often used to initialize the BLE device itself by enabling sensors, notifications, etc.

  3. Write Once at Exit: A characteristic that would be written to once at the end of communication with the device. The value to be written would come from the module configuration. This is often used to turn off sensors or features on the device to help reduce its power consumption when not in use.

  4. Read Periodic: A characteristic that is read at a regular interval (the interval specified in the config). All periodic read sensor values would be collected and published to the Message Broker in a single payload.

  5. Read Notification: A characteristic on the BLE device that supports notifications. The characteristic’s value will be published individually to the IoT Edge v1 Message Broker.
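
To make those usage patterns concrete, a single device’s entry in the module configuration might look something like the following. The property names and UUIDs here are hypothetical; only the five usage patterns come from the list above.

{
    "devices": [
        {
            "macAddress": "54:6c:0e:xx:xx:xx",
            "characteristics": [
                { "uuid": "2a29", "usage": "readOnceAtInit" },
                { "uuid": "f000aa02", "usage": "writeOnceAtInit", "value": "01" },
                { "uuid": "f000aa02", "usage": "writeOnceAtExit", "value": "00" },
                { "uuid": "f000aa01", "usage": "readPeriodic", "intervalMs": 5000 },
                { "uuid": "f000aa11", "usage": "readNotification" }
            ]
        }
    ]
}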

Challenges

This module shares the same core development foundation as the BLE Scanner above, and as such faced the same challenges around IoT Edge v1’s limitations with native npm packages. See the BLE Scanner challenges above for more details.

In addition to those challenges, we had some concurrency issues in the Node.js code that we were unable to resolve during the timeframe of the hackfest. The noble implementation is naturally asynchronous, but we had issues maintaining the context of a characteristic read once the value was returned. We attempted numerous patterns, including the use of promises and the “async” module, but were unsuccessful.

Successes

We were able to get the module to read its configuration via the IoT Edge v1 configuration mechanism and initiate communication with the specified BLE devices.

Future Development

The code for this module needs to be refactored to properly leverage the asynchronous behavior of the noble module. In addition, the implementation of the various usage patterns above needs to be completed.

The API

Principal Developer: Joe Raio

Overview

We exposed four Azure Functions as our API for device management. These would be accessed by the front end to list all devices, get details on a specific device, create a new device, and update the properties of a device. All functions were written in Node.js and set up to be triggered via HTTP.
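
Each function follows the standard Node.js Azure Functions shape. As a sketch of that shape only (not the project’s actual code), a skeletal list-devices function might look like this, with the data access elided (see the MongoDB notes below):

// a skeletal sketch of a list-devices function - illustrative only
module.exports = function (context, req) {
    // ...query the data store for all devices (see the MongoDB notes below)...
    const devices = []; // placeholder result

    context.res = {
        status: 200,
        body: devices
    };
    context.done();
};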

API Development, Debugging & Testing

We developed the functions locally using both the Azure Functions Core Tools and VS Code. This allowed us to rapidly iterate through changes as well as debug our code. This saved us a tremendous amount of time vs having to deploy to Azure each time we needed to verify our code updates.
Postman was used to test API calls both locally and against the live site. This allowed us to modify our request body on the fly and send GET, POST, and PUT requests to the API.

Challenges

  1. Proxy Routes using /api – We set out with a goal of being able to call /api/device using different methods (i.e. POST, PUT, GET) which would in turn route to different Azure Functions. To do this we had to enable the use of Function Proxies. Doing so, though, meant we could not use /api in the route prefix, because /api is the default route prefix when creating a new function. To overcome this we modified the host.json and changed the default route prefix for functions to /func. This allowed us to then use /api/device with our proxies (the host.json change is sketched after this list).

  2. MongoDB API – It was decided that the MongoDB API would be used to interact with CosmosDB. Because of this we were unable to use the built-in CosmosDB bindings for Azure Functions. We had to use the Mongo npm packages and write custom code to read / write / update records in the database. While this was not a huge hurdle, it would have been cleaner (and faster) for us to use the default DocumentDB API. Future versions of the API will use this. (A sketch of the Mongo-based data access follows this list.)

  3. CORS – Early on we ran into CORS issues when trying to access the API from our front-end application. We found that when using proxies our default CORS rules were overwritten. We got past this by adding custom headers to each function directly in the code (also sketched after this list). Further testing needs to be done to determine the exact cause of this issue.
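
For challenge #1, the host.json change was a one-liner. In the Functions v1 host.json schema it looks something like this:

{
    "http": {
        "routePrefix": "func"
    }
}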
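For challenge #2, the custom data access amounted to standard mongodb driver calls. A sketch of a read, assuming the 2.x driver API of the day and a hypothetical connection string app setting named COSMOS_CONNECTION:

// hypothetical sketch of reading devices through the MongoDB API
const MongoClient = require('mongodb').MongoClient;

module.exports = function (context, req) {
    MongoClient.connect(process.env.COSMOS_CONNECTION, (err, db) => {
        if (err) return context.done(err);
        db.collection('devices').find({}).toArray((err, devices) => {
            db.close();
            context.res = { body: devices };
            context.done(err);
        });
    });
};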
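And for challenge #3, the workaround was simply to set the headers on each response; something like this in each function (the wide-open origin is for illustration only):

// setting CORS headers directly in a function's response
module.exports = function (context, req) {
    context.res = {
        body: { ok: true },
        headers: {
            'Access-Control-Allow-Origin': '*', // illustration; restrict in production
            'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE'
        }
    };
    context.done();
};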

The UI

Principal Developer: Jeremy Foster

Overview

One part of the overall project workflow required a user interface - the approval of discovered devices. For this, we turned to Angular and got a bit creative and modern in how we hosted this application - serverlessly!

Angular

Angular’s CLI makes getting started with a new website pretty quick and easy. Angular is a good, modern choice for a UI and offers plenty of features for this application.

Using the CLI, we had a basic site in just a couple of minutes. Then we added a simple DeviceList component and displayed this component on the main page… nothing fancy… one component.

The most interesting part of the UI was the DataService, which is responsible for fetching devices from the API, displaying them in the UI through the device list component, and keeping the list up to date as new devices are discovered and administrators approve or deny devices.

The next step in this part of the project would be to create another Angular component – perhaps called Device – that the DeviceList component would repeat. That Device component would then contain all of the UI and logic for user interactions for managing the devices – for instance, an Approve button and an Always Ignore button.

Next, because we started with BLE devices for this project, the individual found devices would need to have their characteristics (the properties on each device we’re able to read/write data values from/to) enumerated, giving the administrator the ability to determine which characteristics are interesting and how those characteristics should be read (i.e. once, periodically, etc.).

REST Architecture

The API was designed to follow a pure REST architecture, so the higher level operations were absorbed by the UI’s DataService. In the future, a data access layer of sorts could be implemented in a separate or the same API project to make calling from our UI or other UI formats simpler and more consistent.

As an example, in order to keep the API pure REST, a call to approve a device would be something like…

PUT /api/device { "id":14, "approved":true }

In the UI’s DataService, however, that would simply be a call to a higher level function like this…

approveDevice(14);
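
Under the covers, that higher-level function just wraps the REST call. Here’s a rough, framework-agnostic sketch of the idea - not the actual DataService implementation:

// a sketch of the DataService's approveDevice wrapper
function approveDevice(id) {
    return fetch('/api/device', {
        method: 'PUT',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ id: id, approved: true })
    });
}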

Serverless Hosting

Since the UI is composed entirely of static files, we could serve it as a serverless website by using an Azure Function with a custom proxy.

To do this we first created an empty blob container. In this container, we placed the production output of the Angular app (i.e. the /dist folder). Then, using a custom proxy route, we routed all requests for /{restOfPath} to the public URL for the container.

The route definition is as follows:

"root": {
"matchCondition": {
"route": "/{restOfPath}"
},
"backendUri": "https://%mycontainer_uri%/client/{restOfPath}"
}

Here, %mycontainer_uri% is an app setting containing the URI of the blob storage account.

By doing this we avoid having a web app running 24/7 just to serve up static files. When a request is made, the Azure Function simply pulls the file from blob storage and serves it to the browser.

You can view the live site here: https://edgediscover-functionapp.azurewebsites.net/index.html

To deploy the UI we used VSTS to create a custom build process with the following steps:

  1. Get Sources – This gets the latest files that were committed to the repo

  2. npm Install – installs all the required npm packages

  3. npm run build-prod – this produces the output of the UI in the /dist folder

  4. AzCopy – this then takes the output and copies it to the specified blob container.

Conclusion

Like many good projects, this one is unfinished, but I hope you have learned like I have to embrace unfinished projects. If you have to bring everything to completion, you may not start some things even though there may be a lot to learn. I certainly learned a lot on this one.

docs.microsoft.com

News flash. Microsoft is a big company.

It’s people big. I have hundreds of thousands of colleagues.

It’s geography big. We have offices, cloud datacenters and regions, and products around the globe.

It’s facility big. We have many campuses and hundreds of buildings here in the Pacific Northwest and more around the world.

It’s also products and services big. We have hundreds of products, services, platforms, libraries, frameworks, and hardware products by which we attempt to fulfill our mission statement to empower every person and every organization on the planet to achieve more.

As a developer, I think of Microsoft as an ecosystem, and it is increasingly an open ecosystem that provides essential developer tools without locking in the people that are building things. We all hate vendor lock-in.

Sometimes when you’re using a single product, a service, or a framework, it’s easy to get confused or overwhelmed when there are multiple entry points into the documentation, and when you’re talking about a developer ecosystem it’s even worse.

If you’re a developer, then, I want you to know that there’s…

ONE LINK TO RULE THEM ALL

…there’s docs.microsoft.com

Here’s what that looks like in your browser…

screenshot

This site is the home for Microsoft technical documentation, API reference, code examples, quickstarts, and tutorials for developers and IT professionals, and it is your single entry point for learning how to consume an Azure service, install Visual Studio, build a Docker image for a .NET Core Application, use the Node.js Driver for SQL Server, interact programmatically with your Azure bill, and loads more.

Are you looking for the Azure Application Architecture Guide? Look no further.

Do you want to get started building a bot? Have at it.

Need the skinny on Authenticating Users with Forms Authentication using Visual Basic? Uh… okay… there you go.

In the past, you may have visited MSDN or TechNet to get the lowdown on how to do this or that, but going forward, it’s all migrating to Docs.

You should take note too that many of the documentation pages have a header like the following with a date, an indication of average time to read (super helpful), and a list of contributors…

contributors

So Microsoft’s documentation, like code itself, is a collaborative effort - an open source project - and in many cases you’re encouraged to contribute! Just look for an Edit link like this one, and you’ll be whisked away to the GitHub repo where you can fork and PR.

edit

Finally, have a glance at the Docs team’s blog to see what’s new. For instance, did you know there’s a new PowerShell Module Browser? Yeah, I didn’t either.

Have fun.

Canva.com Colors

I host a page at /media that is a resource of stock assets - images, illustrations, video, fonts, etc.

I originally made this resource for myself because I was always forgetting what my favorite stock asset sites were. Over the years, though, I’ve gotten a lot of traffic on this page, proving that I’m not the only developer who dabbles in graphics but isn’t foolish enough to try to create everything from scratch. I don’t have the time, talent, or inspiration to consider that.

On that page, I ask readers to recommend more resources and recently somebody did.

The folks at Canva.com let me know about their free online color tool - canva.com/colors, and I was so impressed that I decided to blog about it as well.

When I start a new website, create a new brand, or even start putting together a photo album for my family, I want to pick a color palette that has some chance of looking good. In the past, I’ve used sites like Adobe Color CC (formerly Kuler), but I’ve always found them to be overkill. Canva Colors instantly struck me as a simple and clean alternative. I was also impressed right off the bat by their inclusion of some design learning. That’s just what most of us developers need - to get a bit more design savvy.

I moved on from their /colors tool to check out the rest of their site and as a web developer, I’m impressed. For example, just take a look at their About page. That’s snazzy.

By the way, Canva asked me if I’d like to include a link to their tool on my /media page, but they didn’t ask me or pay me to blather on about how cool their stuff is. I’m just impressed.

So head over to canva.com and check it out for yourself.

My Top 5 Favorite Things at Maker Faire 2017

I traveled to the San Francisco Bay Area recently to help welcome makers from around the globe to Maker Faire 2017.

This stalwart of all things maker is an inevitable blast. If you’ve ever been a victim of maker’s block, this event will unstick you, and if you’ve ever been tempted to think that you were the most creative person on earth, this event will offer appropriate humility.

Here are the top 5 things I came across that I can’t wait to research, order, make, and talk about…

Microsoft Make Code

I know it seems like cheating to pick the Microsoft booth for this list since I work there, but hey, it’s my blog. And I think I would have picked it anyway, because the impact of the booth was awesome. The folks at Maker Faire seemed to agree too and showed it with two ribbons.

ribbons

One of the showcased products at the booth was Microsoft Make Code.

Make Code is a new in-browser IDE from Microsoft that makes IoT development with a select few partner hardware boards about as simple as you can imagine. If you own a supported board (which we were giving away all weekend), check out these getting started steps…

  1. browse to makecode.com
  2. plug the device in to your USB port

That’s it!

We had everyone from 5 to 95 walking through a tutorial to write their first IoT app, and it was brilliant to see so many lights turn on - on the boards and in the minds of the new IoT hackers that were being made.

While I’m on the subject of awesome Microsoft displays, you can’t beat the Intelligent Kiosk app for Windows 10, which does a phenomenal job of showing off Microsoft Cognitive Services. This app takes a picture every few seconds and runs it through Microsoft’s Cognitive Services API. It does things like associate your face with a dog breed, guess your age and gender, or try to determine your emotion. The results are comical.

You can download the app yourself too. There was hardly a single moment the entire 3-day weekend that there wasn’t a full crowd around each of two Intelligent Kiosk displays making silly faces and laughing out loud.

Maslow

Maslow (maslowcnc.com) is essentially an inexpensive and entirely open project for building a drawbot with a router. You’ve seen drawbots before, I’m guessing: two motors suspend a pen-wielding carriage on a steeply angled drawing surface. Drawings from the computer are translated into data that drives the motors and extends or retracts the pen, so the rig ends up drawing a picture.

The Maslow is like that except that instead of a pen, it’s a router spinning at tens of thousands of RPM with a razor-sharp bit at the end. Yeah! Additionally, the plunge of the router is controlled, so you can program the depth of cut.

Check this out…

The net result is the ability to extract whatever 2D shapes you want from a large piece of plywood.

The interesting things about Maslow from my POV are…

  • It’s cheap. You can get kits for under $500 to put the entire thing together
  • It’s compact. Since it’s upright, you can fit it in a tight space.
  • It’s open. You can extend or adapt the project to your needs.

Goliath CNC

Similar to the Maslow CNC router I already mentioned, the Goliath CNC project cuts things out for you, except instead of suspending a carriage, it has you leave your workpiece flat while a robot drives around on top of it.

It’s like this…

Some time ago I looked into the Shaper Origin and got excited about the ability to cut things out of stock of whatever size. Traditional CNC routers constrain you to a fixed size for your workpiece. The impressive thing about both the Maslow and the Goliath as compared to the Origin is that not only do you get the infinite working area, but you don’t have to directly attend the cut. I wouldn’t leave the room, mind you, but the operator’s role is reduced from router-wielder to router-sitter, and that’s a bit of a relief.

I don’t know which - if any - of these machines will rise to earn the title of most useful in the long run, but they are all super good ideas and I’m excited to see them evolve.

Monoprice 3D Printers

I’m big on 3D design, but I’ve yet to purchase my own 3D printer. This is partly due to the fact that I have access to some in nearby maker spaces.

If I were to purchase a printer today, though, I think I’d get one from Monoprice. Their MP Select Mini 3D Printer V2 is only $219, and their new Mini Delta 3D is available (for only 5 more days!) on Indiegogo for only $169!

You can count on problems with a printer at these price points, but then, you can pretty much count on problems with 3D printers at most price points. It’s hard to make a system reliable when there are so many variables.

The fact that Monoprice’s printers are quite popular would seem to indicate ready availability of replacement parts to either buy or print.

Monoprice represented at the faire this year and showed off both their classic Mini and the new delta, and it was great to see both in action.

PLY90

Sometimes it’s the simple things that have huge impact - like PLY90.

PLY90 bracket

PLY90 is an aluminum bracket that holds plywood together at a 90 degree angle. Simple. But the projects you can make from something like this are endless. Here are a few I liked…

zig zag wall shelf
wall shelf
rolling bench

See more designs that take advantage of the PLY90 bracket at plyproducts.com/collections/projects.

Hydroponics A-Frame System

Bruce Gee of Waterworks was fascinating to listen to as evidenced by the constant crowd of folks standing around asking questions and busily writing down what he shared about his hydroponics experience. Bruce has a way of making hydroponics sound easy.

a-frame system

Bruce used simple and inexpensive lumber and PVC pipe to create an A-shaped structure for running water over the roots of plants, and that was pretty much the end of the story. Most hydroponics systems I’ve seen incorporate lighting and control systems that certainly add to crop growth, but also to overall complexity, and threaten to intimidate your average home farmer.


If you have never been to a Maker Faire, I beg you to go to makerfaire.com and find one near you. We are all creators. You are too.

So what’s your next creation?!

Code Writer's Workshop 2017

You can view or download the PowerPoint deck for this presentation at codefoster.com/deck/cww2017.

I delivered a session today at Code Writer’s Workshop in Seattle.

Code Writer’s is a meta-topic workshop. By that I mean that you don’t attend to learn how to create a web service or how to implement MQTT messaging. You go to learn about all the other topics that revolve around a career in software development.

My session was titled Developer Life Skills, and it was easily the softest topic I’ve delivered to date.

The goal was to look both at how a software engineer can apply their particular skills to the rest of life - eating, family, sleeping, productivity, etc. - as well as to explore how these lateral life topics affect their day-to-day work.

I ventured out a bit and organized my content into 5 chapters - meaning, beauty, truth, community, and efficiency.

Meaning

My first goal was to dash hopes and dreams by reminding the audience that technology is intrinsically meaningless. It’s true. We spend so much time on technology itself, when the really interesting things happen in the application of technology and especially in applications that enrich lives and enable people.

I showed a video that I love about Saqib - a software developer at Microsoft who’s blind and who created an application that allows him to have whatever he’s looking at explained to him. It’s a great example of technology that enriches life.

Beauty

You might wonder how beauty applies to software development. I did too until I thought about it and did some research.

Among other points, I shared how my definition of beauty has less to do with attractiveness and more to do with severity. I shared one example from my life where I experienced the most raw, real beauty - on a big ocean sail trip down the west coast where I watched a sunrise all alone for more than 2 hours, feared for my life in large seas, and was inspected closely by a curious fin whale for a full 45 minutes.

Those of us involved in the creation of software have the relatively rare opportunity to explicitly work on something that’s both creative and very technical, and that’s a lot of fun.

Truth

Next up was truth.

I’ve long thought that most any venture, and certainly a technical venture, is made up of two things: resources and inspiration.

You might have all of the resources and tools you need for the job, but without the passion and vision and inspiration, you’ll have a tremendous headwind.

Another of my favorite life lessons in the truth category is that when you are trudging through new concepts and feel lost… keep trudging! You’re learning all the while even though you don’t understand yet, and in fact, you’re very likely expanding your mind not only to new information, but new concepts altogether. If you bail you’ll miss out and if you make a habit of bailing you’ll wind up narrow.

Community

Next up, in the topic on community, I reminded folks that we build software together and we rely on each other.

I learned in scuba diving training a long time ago, that at some point you take what you’ve learned about keeping yourself alive, and you apply it to the divers around you. You show up at a dive site with all of your preparation done, safety checks made, and redundant gear ready, and then you look at the guy next to you and make sure he’s ready and able and safe.

I also asked what’s more important to a software language, platform, or framework: great syntax and features or a strong developer community. The former is obviously important, but not so much, I would argue, as the latter.

Efficiency

Finally, I said that we need to be efficient and productive over the entire course of our work.

I mentioned the importance of exercise, the importance of a refined and focused personal mission statement, and I shared how much I’ve benefited from eliminating decision fatigue by drinking Soylent for certain meals and buying 10 identical copies of some articles of clothing.

You can download the entire deck at codefoster.com/deck/cww2017.

Introduction to Azure IoT (an MVA course)

One of my university professors once said that “Software is the most complex creation of man.”

I think I’m drawn to software development and to technology in general precisely because it’s complex. It’s a field I know I’ll never reach the extents of. I’ll never run up against boundaries with how creative I can be with technology, and I’ll never run out of new concepts to learn.

So that’s what I love to do - be involved with learning and teaching technology. That’s why I usually say yes to opportunities to present online learning courses.

In April 2017, I presented a course on Microsoft Virtual Academy called Introduction to Azure IoT.

The course served to introduce curious viewers to IoT in general as well as to the broad offerings of Azure in the area of IoT, and it also served to introduce viewers to the more in-depth course on the same subject available on the edX platform. Jump over to Developing IoT Solutions Using Azure IoT (DEV225) on edX now.

You can download this PowerPoint deck to get a deeper sense of what was covered as well as to get a reference to the various external links that I used.

Here are the topics…

Hope this helps you ramp up on IoT!

Dynamic Bot Dialogs

I’m having a lot of fun developing against botbuilder - the Node.js SDK for the bot framework.

When you’re learning to make bots, you study and build a lot of simple bots that do very little. In this case, it makes good sense to simply define the bot’s dialogs in the same file where you do everything else - the file you may call server.js or app.js or index.js. But if you are working on a bot with enough complexity or bulk to the dialogs, you’ll want to settle on a pattern.

Encapsulated Dialogs

The first pattern I embraced I learned from @pveller‘s excellent ecommerce-chatbot. In fact, I learned a lot of good patterns from this bot.

In the ecommerce-chatbot bot, Pavel breaks each dialog out into a separate JavaScript file and wraps it in a separate module. Then from the main file, he calls out to those modules, passing in the bot, and “wires up” each dialog to the bot within that separate module.

Notice in the following code that the main app.js file configures the dialog by requiring it and then calling the returned function passing in the bot object. That allows the dialog to use the bot internally (even though it’s a separate module) to call bot.dialog() and define the dialog functions.

//simplified from https://github.com/pveller/ecommerce-chatbot

//app.js
...
let showProductDialog = require('./app/dialogs/showProduct');
...
intents.matches('ShowProduct', '/showProduct');
...
showProductDialog(bot);
...

//sampledialog.js
module.exports = function (bot) {
    bot.dialog('/showProduct', [
        function (session, args, next) {
            //waterfall function 1
        },
        function (session, args, next) {
            //waterfall function 2
        }
    ]);
}

The result is a much more concise app.js file and a bit of welcome encapsulation. The dialogs handle themselves and nothing more.

Dynamically Loaded Dialogs

Later, while I was working with Johnson and Johnson on a bot, I developed a pattern for dynamically loading dialogs based simply on a) their presence in the dialogs project folder and b) their conformance to a simple convention.

To create a new dialog, then, here’s all I need to do…

module.exports = function (name, bot) {
    bot.dialog(`/${name}`, [
        function (session, args, next) {
            session.endDialog(`${name} reached`);
        }
    ]).triggerAction({ matches: name });
};

The convention I need to follow is to define a module with a function that accepts both a name and a bot object. That function then calls the `dialog()` method on the `bot` just like before, but it uses the name that's passed in as a) the dialog route and b) the trigger action. This means that if the dialog is called `greeting`, then it will be triggered whenever an action called `greeting` fires.

So far, this is a small advantage, but look at how I load this and the other dialogs...

getFileNames('./app/dialogs')
    .map(file => Object.assign(file, { fx: require(file.path) }))
    .forEach(dialog => dialog.fx(dialog.name, bot));

The getFileNames function is my own, but it simply reads the path you pass in recursively returning all .js files.
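
That function isn’t shown above, but a simple recursive implementation might look like this - my sketch, not necessarily the repo’s exact code:

const fs = require('fs');
const path = require('path');

// recursively collect { name, path } records for all .js files under dir
function getFileNames(dir) {
    return fs.readdirSync(dir).reduce((files, entry) => {
        const fullPath = path.join(dir, entry);
        if (fs.statSync(fullPath).isDirectory()) {
            return files.concat(getFileNames(fullPath));
        }
        if (path.extname(entry) === '.js') {
            // an absolute path makes the later require() call unambiguous
            files.push({ name: path.basename(entry, '.js'), path: path.resolve(fullPath) });
        }
        return files;
    }, []);
}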

The .map() calls require on the path of each found file and adds the resulting export (in our case here the modules are exporting a function) to the array as a property called fx.

Finally, we call .forEach() on this and actually execute the function. This configures the dialog for our bot.

The overall result then is the ability to add dialogs to the bot without any wiring. You just create a new dialog, give it a filename that makes good sense in your application, and it should be loaded and ready to be targeted.

You may not get enough context from these snippets to implement this if that’s what you want to do, so check out a fuller sample in the botstarter repo that @danielegan is working on. The botstarter repo is designed to be a good starting point for creating bots.

A Tale of Two Gateways

Two Types of Gateways

There are two types of gateways in the IoT (Internet of Things) world.

The first is a field gateway. It’s called such because it resides in the “field” - that is, it’s on location and not in the cloud. It’s in the factory or on the robot, for instance. Microsoft has an open source codebase for field gateways called the Azure IoT Gateway SDK you can start with.

The second is a cloud gateway, and obviously that one is in the cloud. Microsoft has a codebase for one common cloud gateway function - protocol adaptation - available at Azure IoT Protocol Gateway.

Both of these entities exist as a point of communication through which you direct your IoT messages for various reasons.

You’ll also hear the term edge to refer to devices and gateways in the field. The edge is the part of an IoT solution that’s touching the actual things. In the internet of cows, it’s the device hanging on the cow’s collar. In an airliner, it’s all the stuff on the plane itself (which I realize is a confusing scenario since technically those devices may also be in a cloud).

Reasons to Use a Gateway

Some possible reasons gateways exist are…

  • you need to filter the data. It may be that qualifying data deserves the trip to the cloud, but the rest just needs to be archived to local mass storage or even completely ignored.

  • you need to aggregate the data. Your messages may be too granular, and what you really want to send to the cloud is a moving average, a batch of each 1000 messages, a batch of messages every hour, or something else.

  • you need to react to your data quickly. It doesn’t usually take that long to get to the cloud and back, but then again “long” is relative. If you’re trying to apply the brakes in a vehicle, every millisecond counts.

  • you need to control costs. You can use filtering or aggregation to massage your messages before going to the cloud to reduce your costs, but there may be some other business logic you can apply to the same end (filtering and aggregation are sketched in code after this list).

  • you have some cross cutting concerns such as message logging, authorization, or security that a gateway can facilitate or enforce.

  • you need some additional capabilities. Devices that are not IP capable and able to encrypt messages are dependent on a field gateway to get any messages to the cloud. Devices that are able to speak securely to the cloud but are not for some reason capable of using one of the standard IoT protocols (HTTP, AMQP, or MQTT) require either a field gateway or a cloud gateway (such as Azure IoT Protocol Gateway).
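
Filtering and aggregation in particular are easy to picture in code. Here’s a generic sketch, not tied to any particular gateway SDK; the temperature threshold, batch size, and message shape are all made up for illustration:

// generic gateway-side filter + aggregate sketch
const BATCH_SIZE = 1000; // forward one combined message per 1000 qualifying readings
let batch = [];

function onDeviceMessage(reading, sendToCloud, archiveLocally) {
    // filtering: only qualifying data deserves the trip to the cloud
    if (reading.temperature < 50) {
        archiveLocally(reading); // or ignore it entirely
        return;
    }

    // aggregation: accumulate and send one message per batch
    batch.push(reading);
    if (batch.length >= BATCH_SIZE) {
        sendToCloud({ count: batch.length, readings: batch });
        batch = [];
    }
}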

Gateway Hardware

What kind of hardware might you end up using for a gateway? Well, the possibilities are very broad. It could be anything from a Raspberry Pi to a very expensive, dedicated gateway system.

Intel has a helpful article about field gateways and the hardware they offer. Dell has a product called the Edge Gateway 5000 that looks to me to be a pretty solid solution too.

Also, Azure maintains a big catalog of certified hardware including gateways that might be the most helpful resource.

Closing

There’s certainly a lot more about gateways to know, but I’ll leave this here now in case it helps you out.

TIL Something About Bot Middleware

PREAMBLE: I am trying to blog about the little things now. The idea is partly the reason why so many technical blogs exist - it’s a place for me to record things I’ll need to recall later. But modern search engines are good enough, that you just might make it to this blog post to answer a question that’s burning a hole in your brain right now and that’s awesome. I know I love it when I get a simple, concise, and sensible explanation of something I’m trying to figure out.

MORE PRE-RAMBLE: So, I’ve sort of drifted into bot territory. That is, I didn’t initially get extremely excited about the concept of chat bots. It seemed silly. I have since been convinced of their big business value and have really enjoyed learning how to embrace the Node.js SDK for Microsoft’s Bot Framework.

Recently, I realized that the very best way to learn about the SDK is not to search online for docs or posts, but to go straight to the source, and when you get there, look specifically for the /core/lib/botbuilder.d.ts file.

That file is a treasure trove of useful comments directly decorating the methods, interfaces, and properties of your bot. It’s great that the bot framework is written in TypeScript, because that means this source code contains a lot of documenting types that not only made it easier for the team to develop it, but now make it easier for us to read it as well.

Tonight I was specifically wondering about something. I had seen middleware components for bots using property values of botbuilder and send, but then I saw receive and wondered what every possible property was and specifically what they did.

I discovered that in fact botbuilder, send, and receive are the only possible property values there. Let me drop that snippet of the source code here, so you can see how well documented those are…

/**
 * Map of middleware hooks that can be registered in a call to __UniversalCallBot.use()__.
 */
interface IMiddlewareMap {
    /** Called in series when an incoming event is received. */
    receive?: IEventMiddleware|IEventMiddleware[];

    /** Called in series before an outgoing event is sent. */
    send?: IEventMiddleware|IEventMiddleware[];

    /** Called in series once an incoming message has been bound to a session. Executed after [analyze](#analyze) middleware. */
    botbuilder?: ICallSessionMiddleware|ICallSessionMiddleware[];
}

The IMiddlewareMap is an interface, which is a TypeScript concept. That’s not in raw JavaScript. TypeScript does interfaces right, because they’re not actually enforced on objects that implement them (we are, after all, talking about JavaScript where pretty much nothing is enforced). Rather, they’re an indication of intent - as in “I intend for my object to conform to the IMiddlewareMap interface.”

That means that at design time (when you’re typing the code in your IDE), you get good information back about whether what you’re typing lines up with what you said this object is expected to be.
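
To close the loop, here’s a minimal sketch of what registering those three hooks looks like on a chat bot (the snippet above is from the calling SDK, but the chat SDK’s UniversalBot accepts the same three property names). The logging and the userData write are just illustrations, not anything prescribed by the SDK:

const builder = require('botbuilder');

const connector = new builder.ChatConnector();
const bot = new builder.UniversalBot(connector);

bot.use({
    receive: function (event, next) {
        console.log('incoming:', event.type); // runs when an incoming event is received
        next();
    },
    send: function (event, next) {
        console.log('outgoing:', event.type); // runs before an outgoing event is sent
        next();
    },
    botbuilder: function (session, next) {
        // runs once an incoming message has been bound to a session
        session.userData.lastSeen = new Date().toISOString();
        next();
    }
});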

So that’s just one little thing I learned tonight wrapped up with all kinds of preamble, pre-ramble, and other words. Hope it helps. Happy hacking.