Fetch Azure FTP Credentials from the CLI

This is one of those posts I’m writing for future me (hi, future me!).

If you have an Azure Web App and you want to get its application-level deployment credentials (as opposed to its user-level deployment credentials), you need to run two commands using the Azure CLI:

# to get the FTP endpoint
az webapp show -g <resource group> -n <app name> --query ftpPublishingUrl

# to get your credentials
az webapp deployment list-publishing-credentials -n <app name> -g <resource group> --query '{name:name, publishingUserName:publishingUserName, publishingPassword:publishingPassword}'

The second command gives you the three values you need for your credentials. Combine name and publishingUserName to form the username - ${name}\${publishingUserName} in JavaScript template literal syntax - and use publishingPassword as the password.
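
Here’s a quick sketch of that combination in code (the values are hypothetical placeholders - yours come from the CLI output above)…

//hypothetical values standing in for the CLI output above
const creds = {
    name: 'myapp',
    publishingUserName: '$myapp',
    publishingPassword: '<your-publishing-password>'
};

//the username is the site name and the publishing user name joined with a backslash
const username = `${creds.name}\\${creds.publishingUserName}`; //"myapp\$myapp"
const password = creds.publishingPassword;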

That’s that!

Semantic Selections and Why They Matter

I’m a big fan of at least two things in life: writing code and being productive. On matters involving both I get downright giddy, and one such matter is semantic selection. VS Code calls them smart selections, but whatever.

Semantic selection is overlooked by a lot of developers, but it’s a crying shame to overlook something so helpful. It’s like what comedian Brian Regan says about getting to the eye doctor.

Semantic selection is a way to select the text of your code using the keyboard. Instead of treating every character the same, however, semantic selection uses insights regarding the symbols and other code constructs to (usually) select what you intended to select with far fewer keystrokes.

I used to rely heavily on this feature provided by ReSharper way back when I used heavy IDEs and all ;)

Some may ask: why would you depend on the keyboard for making selections when you could just grab the mouse and drag over your characters from start to finish? Well, it’s my strong opinion that going for the mouse in an IDE is pretty much always a compromise - of your values at least but usually of time as well. That journey from keyboard to mouse and back is like a trans-Pacific flight.

I can’t count the number of times I’ve watched over the shoulder as a developer carefully highlighted an entire line of text using the mouse and then jumped back to the keyboard to hit BACKSPACE. Learning the keyboard shortcut for deleting a line of text (CTRL+SHIFT+K in VS Code) can shave minutes off your day.

Without features like semantic selection, however, selecting text using the keyboard alone can be arduous. For a long time, we’ve had SHIFT to make selections and CTRL to jump by word, but it’s still time consuming to get your cursor to the start and then to the end of your intended selection.

To fully convey the value of semantic selection, imagine your cursor is just after the last n of the currentPerson symbol in this code…

function getPersonData() {
    var result = requestData(currentPerson, 2);
}

Now imagine you want to change currentPerson to currentContact. With smart selection in VS Code, you would…

  1. Tap the left arrow one time to get your cursor into the currentPerson symbol (instead of on the trailing comma)
  2. Hit SHIFT+ALT+RIGHT ARROW to expand the selection

Now, notice what got highlighted - just the “Person” part of currentPerson. Smart selection is smart enough to know how camel case variable names work and just grab that. Now simply typing the word Contact will give you what you wanted.

What if you wanted to replace the entire variable name from currentPerson to, say, nextContact? For that you would simply hit SHIFT+ALT+RIGHT ARROW one more time to expand the selection by one more step. Now the entire variable is highlighted.

One more time to get the entire function signature.

Once more to get the function call.

Once more to get the entire statement on that line and again to get the whole line (which is quite common).

Once more to get the function body.

Once more to get the entire function.

That’s awesome, right?

Integrate this into your routine and shave minutes instead of yaks.

Happy coding!

Beautiful Cascading Node Config

FYI, this is a repost. I lost this post’s source markdown and just recreated it for posterity.

I just learned something, which instantly makes it a good day.

I’ve been on the lookout for a really good pattern in Node.js projects to allow a user to…

  1. Define configuration variables in a JSON config file (not flat environment variables!)

  2. Allow them to override the configuration variables with command line arguments

  3. Make the command line arguments work like any good, modern CLI with double dash full names (i.e. --foo bar) or single dash aliases (i.e. -f bar)

  4. Let the dev know if their code isn’t working because a certain configuration variable hasn’t been set

Here’s what I have now.

let config = require('./arguments.json');
const commandLineArgs = require('command-line-args');

//override config with command line options
let args = [
    { name: 'argumentA', alias: 'a', required: true },
    { name: 'argumentB', alias: 'b', required: true },
    { name: 'argumentC', alias: 'c' }
];
config = { ...config, ...commandLineArgs(args) };

//throw errors if any required arguments are missing
args.filter(a => a.required && !config[a.name]).forEach(a => {
    throw new Error(`A ${a.name} argument must be provided either in the arguments.json file or as a command line argument.`);
});

So, the arguments.json file that we bring in is a place we can define arguments permanently so they don’t have to be included on the command line.

Then we pull in a dependency on the command-line-args package. I was delighted to discover that this package works exactly like I expected it to. For each argument, we can give it a long name (i.e. argumentA) and a short name (i.e. a). Finally, I added the required property myself, which I’ll show you in a second.

Next, I merge these two sources of configuration values using the spread operator. I talked a lot more about spread operators in my Level Up Your JavaScript Game! - Other ES6 Language Features post. The order of these two spread objects is such that it will take the values in my file first, but then override them with command line arguments if they exist.
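
To make that precedence concrete, here’s a tiny sketch (the property values are just placeholders)…

const fromFile = { argumentA: 'from-file', argumentC: 'from-file' };
const fromCommandLine = { argumentA: 'from-cli' };

const merged = { ...fromFile, ...fromCommandLine };
//merged.argumentA == 'from-cli'  - the command line wins when both are present
//merged.argumentC == 'from-file' - the file value survives when there's no override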

Finally, I added another little trick that wasn’t built into the command-line-args package (although I think it should be). I added the ability to make certain arguments required, and to throw an error if one is missing so the user knows exactly why things don’t work.

That’s all!

You Already Have That Linux Command in Windows

I do a lot of work with Windows and Linux, and I have a little trick that I’ve shared 100 times IRL and finally decided to drop into a blog post for posterity.

So often, I’m working with groups of Windows developers trying to access Linux VMs in Azure or Raspberry Pis running Raspbian, and I ask them to ssh into the server.

Note: ssh is not just a tool, it’s a verb, and I concur with @shanselman who has declared that its correct pronunciation is much like the sound made by a downhill skier - a sort of “shoosh”. Now you know, so pass it on.

Unfortunately, many of those Windows developers commence to open PuTTY - a graphical tool for doing serial or terminal communication. If you’re opening a graphical tool for doing CLI work, there’s an imbalance in the force. Something is terribly wrong. You should be far more intimate with your system’s terminal or command line tool and that tool should allow you to ssh.

So how do you ssh from Windows? There are a number of ways, but if you have Git for Windows installed, you probably already can if you just do one simple thing.

Git for Windows installs by default into C:\Program Files\Git. If you look in that folder, you’ll find \usr\bin. And if you look in there, you’ll find a whole ton of Linux commands, and one of those commands is ssh.

If I remember correctly, these commands are actually the Cygwin Win32 ports of most of Linux’s commands.

So to start using all of those commands, all you have to do is add C:\Program Files\Git\usr\bin to your system path.

Method 1: edit the system environment variables

Go to Start and type “environment” and then choose to “Edit the system environment variables”. Then hit the Environment Variables button, find the Path variable in either your User or System Variables, and edit it to include C:\Program Files\Git\usr\bin. Now restart any terminals and type ssh to test.

Method 2: add the path in your profile

The method I actually use to get these commands into my path is a bit different. I add a command to my PowerShell profile (C:\Users\jerfost\OneDrive\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1), but if you use bash you could do the same thing in your .bashrc or .profile.

The advantage to doing this in my profile instead of using Windows’ UI for editing my path is that I can keep my profile saved in cloud storage so it persists across reinstalls of Windows. So I don’t have to remember to edit my path after I reload my computer.

To do this in PowerShell, go to your command line and use your editor of choice to edit your $profile. I would type code $profile to use Visual Studio Code to edit it.

Then add this line somewhere in there…

$env:Path += ";C:\Program Files\Git\usr\bin"

Again, test this by restarting your terminal and simply calling ssh. Now try scp and touch and ls. Yay! But ls already worked for you, you say? That’s because PowerShell has a bunch of built-in aliases, and ls is an alias for Get-ChildItem (just like dir is). So the functionality is similar, but not exactly the same.

There are a bunch of these aliases, in fact. You can see the full list by running Get-Alias. I recommend adding the following lines to your PowerShell profile to remove these aliases and unlock the real (well, almost real) Linux commands…

If(Test-Path alias:curl) { Remove-Item -Path alias:curl } #remove alias that shadows use of real curl
If(Test-Path alias:rm) { Remove-Item -Path alias:rm } #remove alias that shadows use of real rm

Enjoy!

TypeScript for Documentation

TypeScript is wonderful for a variety of reasons and there’s one I want to highlight right now.

TypeScript allows a developer or team to sprinkle types in to their codebase. These types make it much easier for your IDE to tell you you’re doing something you don’t intend to do in your logic. That’s excellent.

But the types also document your codebase well.

Here’s a function without types…

function sum(n1, n2) {
    return n1 + n2;
}

And then with types…

function sum(n1: number, n2: number): number {
    return n1 + n2;
}

And the obvious advantage is that at a glance, I as a human can see what kinds of variables this function is expecting and what it’s going to give me back.

Additionally, my IDE can inspect these types and give me some information about the expected parameter types without even making the journey to the source code to look…

I can take that a step further and add some comments to the source and get even better descriptions.
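
Here’s a sketch of what that might look like using a standard JSDoc comment (the descriptions are just illustrative)…

/**
 * Adds two numbers together.
 * @param n1 The first addend
 * @param n2 The second addend
 * @returns The sum of n1 and n2
 */
function sum(n1: number, n2: number): number {
    return n1 + n2;
}

Now hovering over a call to sum surfaces those descriptions right alongside the types.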

And now I can be more awesome.

A Simple Brain

I’m not (yet) an expert in machine learning, but like so many I recognize that it’s an incredibly integral part of our future.

Right now, most data insights are the result of some significant effort - a lot of data, a lot of training, and perhaps a significant amount of time spent by data scientists.

I anticipate that insights are going to come not only out of these large projects, but out of the small workflows as well. For example, your average web developer may tend to create some marketing information and some web forms online, but in the future, they’ll also apply some machine learning.

I have been dabbling with Brain.js lately, and I wonder if something like this might be a good complement to some of the massive capability available in very high scale machine learning solutions. Brain.js is just JavaScript, and sometimes that’s all you need - something small that will run right in the browser!

So I copied some code from the Brain.js website and regurgitated it here for your benefit. I also put it into my repo simple-brain.

Here’s the code…

const brain = require('brain.js');

var net = new brain.NeuralNetwork();

console.log("The following neural network implements an XOR (exclusive OR) logical operation, where the output will be true when the inputs are neither all false or all true.");
console.log("Training the model...");
net.train([
{ input: [0, 0], output: [0] },
{ input: [0, 1], output: [1] },
{ input: [1, 0], output: [1] },
{ input: [1, 1], output: [0] }
]);


console.log("Evaluating [1, 1] against the model (should approach 0)...");
console.log(net.run([1, 1]));

console.log("Evaluating [1, 0] against the model (should approach 1)...");
console.log(net.run([1, 0]));

I’m learning a lot about ML right now and it’s fun to have a new space to learn and explore. It’s fun too that ML has a bit of a presence in the JavaScript world even though the Python leaning is strong. I found this excellent article by Jonathan who’s as crazy as I am to consider JS for ML at this time.

Zoom in Windows

I do a lot of presentations. The only thing I like more than writing code is talking about writing code.

I’ve always had a little angst about zooming my screen though because…

  • The built-in Windows Magnifier seemed to get in my way

  • The excellent and popular Zoom It tool requires an extra install and I never seem to have it running when my presentation starts.

I recently discovered, though, a couple of options that make the built-in Windows Magnifier work way better for me.

First, I looked up the keyboard shortcuts and beyond the basics of WIN + = for zooming in and WIN + - for zooming out, you can use WIN + ESC to exit the zoom altogether. Prior to discovering that, I thought it was necessary to zoom all the way back out. I also found that CTRL + ALT + Mouse Wheel works to zoom in and out with your mouse.

Now, my biggest aggravation with the Magnifier was how the UI rendered every time I performed a zoom. There’s a Magnifier setting to “Collapse to magnifying glass icon”. That’s better than the full window, but it’s still fairly obtrusive. So here’s the trick.

  1. Start Magnifier so the icon appears on the Task Bar.

  2. Now right click on the Magnifier icon, right click on the Magnifier entry in the context menu, and hit Properties.

  3. Now set Run to Minimized.

That’s it. Now when you hit WIN + = to zoom in, the screen zooms in to the mouse cursor and doesn’t render any obtrusive UI.

It’s the simple things that light me up!

Quick note. A colleague just read and tried this and it didn’t seem to work for him. I tried it again and the behavior is not exactly like I indicated in this post. I spent some time trying to figure out exactly what’s up, and I figured out that if the magnifier is not running, then it will still create the UI. If you check the box I mentioned earlier - Collapse to magnifying glass icon - the UI is smaller, but it’s still there. The application is not active, but if it happens to show up where your cursor is, then it’s still obtrusive. So, I am going to submit this feedback to the Windows team to see if I can be an advocate of change. It does help if you move the Magnifier’s window to a remote area of the screen, and you can also just minimize it, but both of those take a little effort. Anyway, we’ll get this figured out :)

Okay, folks. I submitted this feedback via Windows Feedback and it was promoted to bug 16580796 within the hour. Impressed. Fingers crossed this is fixed in the next Windows update.

Level Up Your JavaScript Game! - Other ES6 Language Features

See Level Up Your JavaScript Game! for related content.

Sometimes it takes a while to learn new language features, because many are semantic improvements that aren’t absolutely necessary to get work done. Learning new features right away though is a great way to get ahead. Putting off learning new features leaves you lagging the crowd and constantly feeling like you’re catching up. I’ve noticed that junior developers often know more modern language features than senior developers.

There are quite a few language features that were introduced in ES5 and ES6, and you’d be well off to learn them all! Certainly, though, look into at least the ones I’m going to talk about here. I recommend you learn…

…to effectively use the object and array spread operators.

From MDN: “Spread syntax allows an iterable such as an array expression or string to be expanded in places where zero or more arguments (for function calls) or elements (for array literals) are expected, or an object expression to be expanded in places where zero or more key-value pairs (for object literals) are expected.”

The spread operator is an ellipsis (...), but don’t confuse it with the rest operator (also an ellipsis). The rest operator is used in the argument list of a function definition. The spread operator on the other hand is used… well, I’ll show you.

Think of the spread operator’s function as breaking the elements of an array (or the properties of an object) out into a comma delimited list. So [1,2,3] becomes 1,2,3. The array spread operator is most helpful for either passing elements to a function call as arguments or constructing a new array. The object spread operator is most helpful for constructing or merging objects’ properties.

If you have an array of values, you can pass them to a function call as separate arguments like this…

myFunction(...[1,2,3]);
//equivalent to myFunction(1,2,3)

If you have two objects - A and B - and you want C to be a superset of the properties on A and B you do this…

let C = {...A, ...B};

Therefore…

{...{"name":"Sally"},...{"age":10}}
//{"name":"Sally", "age":10}

…to get into the habit of using destructuring where appropriate.

Destructuring looks like magic when you first see it. It’s not just a gimmick, though. It’s quite useful.

Destructuring allows you to assign variables (on the left hand side of the assignment operator (=)) using an object or array pattern. The assignment will use the pattern you provide to extract values out of an object or array and put them where you want them.

let {name,age} = {name:"Sally",age:10};
//name == "Sally"
//age == 10

That’s a lot better than the alternative…

let person = {name:"Sally",age:10};
let name = person.name;
let age = person.age;

It works with nested properties too…

let {name, address: {zip}} = {name:"Sally", age:10, address:{city:"Seattle", zip:12345}};
//name == "Sally"
//zip == 12345

It works with arrays too…

let [first,,third] = ["apple","orange","banana","kiwi"]
// first == "apple"
// third == "banana"

Destructuring is handy when you’ve fetched an object or array and need to use a subset of its properties or elements. If your webservice call returns a huge object, destructuring will help you pull out just the parts you actually care about.

Destructuring is also handy when creating mixins - objects that you wish to sprinkle functionality into by adding certain properties or functions.

Destructuring is also handy when you’re manipulating array elements.
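
For example, swapping two array elements is a one-liner with destructuring…

let fruit = ["apple", "orange", "banana"];

//swap the first and last elements without a temporary variable
[fruit[0], fruit[2]] = [fruit[2], fruit[0]];
//fruit == ["banana", "orange", "apple"]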

…to use template literals in most of your string compositions.

I recommend you get in the habit of defining string literals with the backtick (`) character. These strings are called template literals and they do some great things for us.

First, they allow us to line wrap our string literal without using any extra operators. So as opposed to the existing method…

let pet = "{" +
"name:\"Jim\"," +
"type:\"dog\"," +
"age:8" +
"}"

…we can use…

let pet = `{
name:"Jim",
type:"dog",
age:8
}`

Elegant!
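
They also give us interpolation - expressions inside ${} are evaluated and dropped right into the string (you’ll see that syntax again in the lambda examples below)…

let petName = "Jim";
let age = 8;
let sentence = `My dog ${petName} is ${age} years old.`;
//"My dog Jim is 8 years old."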

…to understand the nuances of lambda (=>) functions (aka fat-arrow functions).

And it looks like I’ve saved one of the best for last, because lambdas have so dramatically increased code concision. Not to overstate it, but lambda functions delight me.

I was introduced to lambda functions in C#. I distinctly remember one day in particular asking a fellow developer to explain what they are and when you would use one. I distinctly remember not getting it. Man, I’ve written a lot of lambda functions since then!

The main offering of the lambda is, in my opinion, the concision. Concise code is readable code, grokkable code, maintainable code.

They don’t replace standard functions or class methods, but they largely replace anonymous functions, in case you’re familiar with those. I very rarely use anonymous functions anymore. They’re great for those functions you end up passing around in JavaScript, because… well, JavaScript. You use them in scenarios like passing a callback to an asynchronous function.

Allow me to demonstrate how much more concise a lambda function is.

Here’s a call to that readFile function we were using in an earlier post (I’m simplifying its callback signature here - Node’s fs.readFile callback actually receives an error argument before the contents - so we can focus on the shape of the code). This code uses a pattern where functions are explicitly defined before being passed as callbacks. This is the most verbose pattern.

fs.readFile('myfile.txt', readFileCallback);

function readFileCallback(contents) {
    //do something with the contents
}

Now let’s convert that function to an anonymous function to save some lines of code. This is recommended unless of course you’re paid by the line of code.

fs.readFile('myfile.txt', function(contents) {
    //do something with the contents
});

Notice that the function name went away. I for one strongly dislike the first pattern. When a callback function is only used once, I feel like it belongs inline with the function call. If, of course, you’re reusing a function for a callback, then that’s a different story.

Now let’s go big! Or small, rather. Let’s turn our anonymous function into a lambda.

fs.readFile('myfile.txt', txt => {
    //do something with the txt
});

I love it! Notice, we were able to do away with the function keyword altogether and we specified its argument list (in this case only a single argument) on its own. Notice too that I called that argument txt. I could have, of course, kept the name contents, but I tend to use short (often only a single letter) arguments in lambda functions to amplify the brevity. Lambda functions are very rarely complex, so this works out well.

The loss of the function name and keyword saved some characters, but lambda functions get even shorter. If a lambda contains only a single expression, the curly braces can be dropped. The expression in this case becomes the return value of the lambda.

To illustrate, let me use a new example - this one from my post on arrays in this series…

let numbers = [1,2,3,4,5,6];
let smallNumbers = numbers.filter(n => n <= 3);

In this example, n => n <= 3 is a complete lambda function. I know, concise right?! This example illustrates the value of the single letter arguments and also introduces you to the expression syntax. The body of the lambda is n <= 3. That’s an expression. It’s not a statement such as…

let n = 3;

And it’s not a block of statements such as…

{
    let m = 3;
    let n = 4;
    let o = 5;
}

…and like I said, when the body of your lambda is a simple expression, you can drop the curly braces and the expression becomes your return value.

So in the example, the .filter() function wants a function which evaluates to true or false. Our expression n <= 3 does just that, and returns the result.

There are two caveats that I’ll draw out.

First, if you have 1 argument in your lambda function, you do not need parentheses around the argument list. In our previous example, n => n <= 3 is a good example of that. If you have 0 arguments or more than 1 argument, however, you do. These are all valid…

() => console.log('go!') //0 arguments
x => x * x //1 argument
(a,b) => a + b //2 arguments
(prefix, firstName, lastName) => `${lastName}, ${prefix} ${firstName}` //3 arguments

If you use TypeScript, you may notice that the presence of a type on a single argument lambda function requires you to wrap it with parentheses as well, such as (x:number) => x * x.

The second caveat is when your lambda returns an expression, but that expression is an object literal wrapped in curly braces ({}). In this case, the compiler confuses your intention to return an object with an intention to create a statement block.

This, then, doesn’t do what you intend…

let generatePerson = (first,last) => {name:`${first} ${last}`}

To direct the compiler, just do what you always did in complex mathematical statements in high school - add some more parentheses! We could correct this like so…

let generatePerson = (first,last) => ({name:`${first} ${last}`})

And there’s one more thing about lambdas that you should know. Lambdas have a feature to remediate a common problem in JavaScript anonymous functions - the dreaded this assignment.

Regular functions in JavaScript (anonymous or named) get their own this binding, determined by how they’re called. Lambda functions do not - they simply use the this of the enclosing scope. So if you use this in a lambda function, chances are the sun will keep shining and the object you intended to reference will be referenced. No more _this = this or that = this or whatever else you used to use everywhere.
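
Here’s a small sketch of the difference (the Timer example is just illustrative)…

function Timer() {
    this.seconds = 0;

    //a regular anonymous function would get its own `this`, so it would NOT update the timer:
    //setInterval(function () { this.seconds++; }, 1000);

    //a lambda uses the enclosing `this`, so the Timer instance updates as intended
    setInterval(() => this.seconds++, 1000);
}

let timer = new Timer();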

That’ll do it for these ES6 language features, and in fact that’ll do it for this series. If you jumped here from a search, head back to Level Up Your JavaScript Game! to see the rest of the content.

Thanks for reading and happy hacking!

Level Up Your JavaScript Game! - ES6 Modules

This post is not yet finished

See Level Up Your JavaScript Game! for related content.

Unfortunately, the whole concept of modules in JavaScript has undergone a ton of evolution and competing standards, and for a while it seemed like no two JavaScript environments used modules the same way. Spending a little time figuring out exactly what’s happening goes a long way toward demystifying things.

To level up in JavaScript modules, I recommend you learn…

…to transition from Node.js’s CommonJS modules to ES6 modules.

Node has not yet fully adopted ES6 modules, but it’s coming soon. We developers can use them today, though, with a transpiler, and I recommend it. We may as well get into tomorrow’s habits today. Instead of…

const myLib = require('myLib');

…use…

import myLib from 'myLib';

The former strategy - CommonJS - is a well-established habit for most of us, but it’s not inherently as capable as the latter - ES6 modules. I’m going to assume you’ve used the CommonJS pattern plenty and skip explaining its nuances, and talk only about the newer, better, faster, stronger ES6 modules.

To play with some of the concepts on this page, install TypeScript.

npm i -g typescript

…to define an ES6 module and export all or part of it.

CommonJS modules are defined largely by putting some JavaScript in a separate file and then requiring it. ES6 modules are too. The differences come in how a module describes what it exports - that is, what it makes available to anyone who decides to depend on it.

In ES6 modules, you put export on anything you want to export. Period. That’s easy :)

//mymodule.ts
let x = "a thing";
export let y = "another thing";
let z = "yet another thing";

In the above example, only y would be available to whoever takes a dependency on mymodule.

You can put export on variable declarations (like the let above), classes, functions, interfaces (in TypeScript), and more. Read on to see how these various exports get imported.

…to import an entire module.

To import everything a given module has to offer - all of the exports…

import * as mymodule from './mymodule';
console.log(mymodule.y);

The * indicates that we want everything and the as mymodule aliases (or namespaces) everything as mymodule. After this import, we would be free to use mymodule.y in our calling code.

…to import parts of a module.

Let’s say our module looked like this…

//mymodule.ts
export let x = "a thing";
export let y = "another thing";
export function sum(a,b) {
    return a + b;
}

If we decide in our calling code that we need x and we need the sum function, then we can use…

import { x, sum } from './mymodule'
console.log(x);
console.log(sum(10,10));

Notice that we don’t need to prefix x or sum with anything. They’re imported directly into our namespace.

…to alias modules on import.

Sometimes, you want to change the name of something you import - for instance, to avoid a naming conflict…

import { x, sum as add } from './mymodule'
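
After that import, the aliased name is what you use in the calling code…

import { x, sum as add } from './mymodule'

console.log(x);           //"a thing"
console.log(add(10, 10)); //20 - we call it by its alias, not its original name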

That’ll do it for ES6 module imports. Now head back to Level Up Your JavaScript Game! or move on to my final topic on ES6 features.

Level Up Your JavaScript Game! - Regular Expressions

See Level Up Your JavaScript Game! for related content.

I’m sorry, but there’s no way around it. You have to master regular expressions.

Regular expressions (regex for short) have a reputation of being very difficult, but if you happen to be an entry-level developer, I really don’t want you to be intimidated by them. They’re actually not so difficult. They only look difficult once you’ve created one. In a sense, they’re an easy way to at least look like a ninja.

JavaScript’s implementation of regular expressions was tough for me at first because there are a few different ways to go about it. Spend some time writing and calling a couple of patterns though, and you’ll quickly master it.

To level up in JavaScript regular expressions, I recommend you learn…

…to write your regular expression.

That’s right, first you have to learn how to write a good regex pattern. I’m not going to go into detail, but if you want some help you’re a quick web search away. I highly recommend regexr.com. It’s good not only for learning the patterns, but testing them too.

In learning patterns, you should learn about capture groups too. Defining capture groups is simple - you just put parentheses around certain parts of your pattern. Those parts of the pattern will then be available in your matches as independent values.

Let’s say you wanted to pull the area code out of a phone number pattern. You could use a pattern like (\d{3})-\d{3}-\d{4}. That’s obviously a very simplistic pattern that would only match US-style, 10-digit phone numbers with dashes between the groups, but notice the parentheses around the first group. That means that part - the area code - is going to be made available as a value for you after you execute the regex.

…to quickly tell if a pattern is detected in some text.

If you don’t need the actual matches of the regex execution, but just want to see if there’s a match, you use <pattern>.test(<text>). For example…

/\d{3}-\d{3}-\d{4}/.test('555-123-4567') //true

…would return true. In JavaScript, you put regular expressions between slashes (/) just like you put strings between quotes.

…to use .exec() for single pattern matches with capture groups.

If you need not only to know that the pattern matched, but also to get values from the match such as the match itself and all of the capture group values, then you use .exec()

let match = /(\d{3})-\d{3}-\d{4}/.exec('555-123-4567');
match[0] //555-123-4567
match[1] //555

…and because I added parentheses around the first number group there, that value is returned as part of the match. The full match is always first ([0]), and each subsequent capture group follows in the order you defined them from left to right ([1], [2], …, [n]).

…to use .match() to find multiple matches in a string.

The .match() function is on String.prototype, so it’s available on any string. Besides flipping the calling pattern from .exec() (.exec() uses <pattern>.exec(<text>) while .match() uses <text>.match(<pattern>)), this function has a couple of other peculiarities.

First, when used with the g flag it does not return your capture groups, so if that’s what you’re looking to do, then use .exec().

Second, it is capable of capturing multiple matches returned as an array. So if you do something like…

"14 - 8 = 6".match(/\d+/g) //[14,8,6]

The g stands for global and is a regex flag that tells it to find every match in the string rather than stopping at the first. Look at all of the other options that are valid there too. They can be helpful.

If you need to capture multiple matches (like you get with .match()), but you also want the capture groups (like you get with .exec()), then you need to call .exec() in a loop like this…

let text = "The quick brown fox jumps over the lazy dog.";
let match;
while (match = /(t)he/ig.exec(text)) {
console.log(match[0]);
console.log(match[1]);
}

/* Should log...
The
T
the
t
*/

Note that I included an i and a g option on the regex (/the/). The i makes the search case insensitive and the g directs it to find every match in the text. Notice that match[0] equals the full match each iteration and match[1] is the contents of the capture group I defined (the first letter of the word “the” for whatever reason).

That’ll do it for regular expressions. Now head back to Level Up Your JavaScript Game! or move on to the next topic on ES6 module imports.