
Colin Devroe

Reverse Engineer. Blogger.

Attending Venture Idol 2017 at Ben Franklin TechVentures

In 2007 I visited the area where Ben Franklin TechVentures is now. I was there for an interview with the then CEO of Viddler, Rob Sandie, to see about working there full time. At the time, Viddler was housed in Jordan Hall – a one-story building next to the now incredible Ben Franklin TechVentures complex. It wasn’t until many months later we moved Viddler out of the closet-like space in Jordan Hall and into the future-feeling building next door.

That memory pales in comparison to what exists there today.

I make mention of this fact because the feeling one gets when walking into Ben Franklin TechVentures is that the work that goes on in this building is new, exciting, and is the future of technology in our area. I personally want schools, libraries, incubators, and town squares to feel as though they are leading us into the future. Where the work that I do is raised to meet the expectation of the environment. I feel that Ben Franklin TechVentures does that.

In August when I presented at the local meet up I loved seeing the new wing being constructed. This month that wing was completed and this year’s Venture Idol 2017 event was held in it.

But buildings aren’t everything. The community is of even more import. And the community is strong.

This year’s Venture Idol was the best attended year yet and, as Fred Beste (the emcee for the event) pointed out, everyone had a chair for the first time. He’s seen BFTV’s growth and he was as excited as I was to see where it is today.

The presentations by the three finalists were great. Mark Keith and I remarked how polished each presentation was. In my mind there were two presentations that were clearly the best: Channel Ape (from Scranton woot woot!) and Give Gab. Both had impressive results, tight presentations, and a roadmap that made sense. At the end of the night Channel Ape took home the victory.

Photo: Mike Averto, CEO Channel Ape, preparing for presentation.

Yes, it is a bit of a jaunt from Scranton to Bethlehem. But it has been worth it every single time I’ve made it over the last 10+ years. I’m looking forward to seeing the sort of growth that region has been enjoying happen in our area too. It is only a matter of time.

Speaking at the 2017 tecBRIDGE Entrepreneurial Institute

Photo credit: Mandy Pennington on Twitter.

On Friday I had the privilege to host two sessions at the 2017 tecBRIDGE Entrepreneurial Institute Conference at Marywood University. The event was very well attended (I’d say nearly 200 people, but I don’t know for sure). The speakers and panels were engaging, interesting, and the number of people that remained until the last minute of the event was evidence of that.

My session was titled Social Media Metrics that Matter. I didn’t choose the title but I enjoyed the topic. The audience was mainly students focusing on being future business owners and also local businesses and organizations in our area. I can tell from the feedback that the subject matter was welcome.

The way I laid out my outline was to bring everyone in the room up-to-speed with common metrics that can be tracked on social networks. We spoke about how each of those metrics impacts the business, the content, the page. Then, we used a few example businesses to determine which of the metrics each of them should track and why.

It was a good exercise, even for me, and I hope those that attended each of my two sessions got something out of it.

A technology prediction time capsule

Readers of my blog will know that I occasionally attempt to predict when certain technologies that I write about will hit the mainstream. While I’m very passionate about a few technologies, I try to temper that excitement with the experiences I’ve had, the wisdom that comes with age, and other factors. Usually, things take a little longer to happen than we’d like for the things we want to see most. And sometimes, sometimes, the things we want most never materialize at all.

For the purposes of this post, mainstream doesn’t mean critical mass but rather mass market adoption. With 7B+ people on the planet, reaching critical mass is far easier than reaching mass market saturation. In other words, a company, product, or technology can reach sustainability and never truly hit the mass market. Examples: Tesla can succeed, be profitable, and have happy customers without the world moving on from fossil fuels. A company focusing on AI can make a great living doing compelling and challenging work without every family having its own personal C-3PO.

Here are some stake-in-the-ground predictions on some of the most talked about technologies of our day. We’ll see in the next few decades if I was even close.

  • Legal, fully autonomous driving with no human assistance: Mid-summer 2026 – Even 9 years out there will still only be a few select vehicles that will fit into this category. There will still be humans driving on the road. And, only the most expensive cars will have all of these features. But, it will exist, be available to anyone, and be legal in the US. And I also believe there will be small fleets running in select cities for Lyft, Uber, and I believe Tesla will have a ride-share platform by this point. Also, don’t be surprised if Apple does too.
  • Bitcoin, or some crypto-currency, being widely transacted at small retail stores in the US: 2027 – If Square, or some other platform with high market saturation, turns on crypto for retail SMBs then we can say they accept this form of tender. But, I believe it will be 10 years before we see a decent number of daily transactions by consumers. I know, “decent” is relative so I’ll give it a number: $100,000,000 US dollar equivalent in a single month. That is still only a tiny fraction of US monthly retail revenue, which runs in the hundreds of billions of dollars as of September 2017. Side note: By this time we’ll see talk of the US dollar being converted to an all digital currency and, perhaps, transacted on its own blockchain.
  • Mixed Reality experiences used in everyday work environments: 2027 – Today we share links to web sites, documents on Google Drive, and flat or animated graphics to design and develop both soft and physical products. By 2025 many of these everyday things will be accessible, and an even better experience, within MR. I believe most businesses with digital assets will have multiple pairs of “glasses” or “goggles” that will allow team members to view or collaborate on these types of data. In other words, by 2030 rather than sending a child a link to Wikipedia to learn about our Solar System I believe we’ll be sending them MR experiences that they will consume using an augmented reality experience on a device other than a flat panel display. This happens today, but nowhere near the mass market. And this industry has a long way to go. Even further than I previously thought.
  • Wireless internet takes over all cable based internet: 2029 – Most people in the US will connect to the Internet via wireless across all devices. And there will be no limitation on bandwidth usage.
  • Fully autonomous fleets replace individual car ownership: 2037 – Today US cities are plagued by traffic jams comprised of single occupant vehicles. Mass transit softens this but doesn’t solve the issue due to the convenience of a car. Ride sharing services have softened this even more and car ownership in urban areas is on the decline. By 2037 we’ll see massive reduction in individual car ownership in cities but also in the hinterlands as fleets of fully autonomous vehicles, combined with better mass transit, can care for the majority of transportation needs. I believe, however, families with at least 2 children will still have a single family-owned vehicle of some sort. Again, I’d like to put a number on this. So I’d say 15-25% less car ownership/use for individuals and commuters nationwide.
  • Mixed Reality replacing many conventional meatspace locations/activities: 2050 – By 2050 the majority of children in the US will have the option to attend school in VR à la Ready Player One. Virtual classrooms will no longer be limited by federal budgets but will be designed to appear like cathedrals of learning.
  • (Because, why not?) An off-planet human civilization: 2175 – Humans will walk on Mars in the 2020s. And, perhaps, a small moon or Mars base will exist in similar fashion to today’s ISS in the 2030s. But a civilization, where people live, work, play, have children, and die peacefully, won’t exist on any other planet or moon (likely the Moon will have an established civilization prior to Mars). The reason I put this far-reaching prediction on this list is because I believe the excitement around a human footprint on Mars will lead to speculation about off-planet civilizations. But, we must all remember, we put a footprint on the Moon many, many decades ago and then just never went back. I do think that we’ll be mining objects near Earth much, much sooner. Even the Moon. But we’ll do that with robots and minimal human intervention.
  • Tweet editing – Never.

Check back in a few decades to see if I was even close.

Creating Summit: The current summit view

This post is the first in a series of posts about my experience building and designing Summit. This post focuses on just one view within the application: the current summit view.

The idea for Summit came nearly 4 years ago as far as I can tell. I’ve hunted around for scraps of paper, digital notes, code snippets to see if I can come up with an exact date but I’ve been unable to. And it has been fits and starts for several years.

When Kyle Ruane and I started on the idea we first thought the UI would be a bit more game-like. I envisioned a 3D model of the current mountain you were hiking that would progress the person up the summit in first-person towards each goal. This was altogether too much work, and far too difficult given my unfamiliarity with the platform. Kyle’s suggestion – again, many years ago – was to use a low poly look. He would craft a low poly representation of the summit and we could allow the user to move around in it, perhaps even spin it around, zoom in-and-out, etc.

I pulled that thread for a very short time before giving up. Remember, we started toying with the idea of Summit before Swift was released. So I was trying to draw this UI with Obj-C. Something I’m even more terrible at than Swift.

Here is what one attempt at drawing progress lines using Obj-C looked like 4 years ago or so. I took this screenshot in June 2014 and was already labeling it “historical junk” in my files.

The red triangles were goals to meet, the blue line was your path, and the white line was your progress so far. My goal was to overlay this on top of the low poly art that Kyle drew. This was inspired by maps like this. (copied here for archival purposes)

This worked but was not that easy to pull off, introduced more complexity than we needed, and so we quickly shelved the idea until we got more familiar with the platform.

In tandem I began constructing a simple web UI to start cataloging steps from a phone. This was purely to get used to writing code that would track a user’s steps, show stats, work on our step algorithm (the code that determines how far up Mount Everest a single step walking in a downtown city parking lot gets you), etc.
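
A toy sketch of that kind of step algorithm (the stride length, trail length, function and property names here are my illustrative assumptions, not Summit’s actual values) might look like:

```javascript
// Hypothetical step-to-summit conversion. Constants are illustrative.
const STRIDE_METERS = 0.76; // assumed average walking stride length

function progressForSteps(steps, summit) {
  // Convert flat steps into distance walked on the ground...
  const distance = steps * STRIDE_METERS;
  // ...then map that distance onto the summit's trail length,
  // capping progress at 100%.
  return Math.min(distance / summit.trailLengthMeters, 1);
}

const everest = { name: "Mount Everest", trailLengthMeters: 68000 };
progressForSteps(10000, everest); // a day of walking, a fraction of the trail
```

The real algorithm would presumably account for elevation gain rather than treating every step as flat, but even a sketch like this shows why the mapping needs tuning per summit.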

It went this way for a few years. I would open up a code editor and begin working on the pieces of Summit; the progress UI, the algorithm, the code to read from a user’s step count or HealthKit or Apple Watch.

In June 2017, when I picked up this project on my own to take on since Kyle had moved away, I decided I needed a simpler approach to the UI. In part because Kyle is the design genius but also in part because I wanted to get as quickly to shipping an app as I possibly could. I prefer to iterate on ideas with user feedback rather than to work on something in a silo for years. I wanted a way to show the summit, or some visual from the summit, but yet also show one’s progress. And I also still needed multiple goals per summit.

Here are a few drawings from this summer.

See, I’m not an artist. Admittedly, though, this wasn’t an attempt to draw anything beautiful but rather to get a general idea for all of the views I needed to pull off the layout. I needed some labels, some buttons, navigation, etc.

The long goal buttons were really “a punt” on my part. I gave up trying to get Xcode’s Storyboard feature to properly align a changing number of goal buttons (since each summit has a different number of goals) in a way that worked with each device size. It was very frustrating. So I began to go down this path of having them just be full-width, flat buttons.

But then I ran into Brian Voong on YouTube. In most of his video tutorials he suggests forgoing the Storyboard feature and using code to create the UI. Though I didn’t want to lose the progress I had made, I’m so glad that I took his advice. Writing UI directly in Swift is far, far easier (for me) and seemingly more powerful than using Storyboards.

This revelation allowed me to go back to a drawing I did a month earlier. This one:

On the left, the elements needed, on the right, a rough sketch of a much more minimal and airy design of the current summit view. The goal buttons have varying distances between them relative to how far apart they are in real life (I’m still working on getting this right in the app).

Using Swift I was able to make this happen much more easily than with Storyboards.

The above is one of the very first swings at this view. It had all of the elements I wanted. And I’ve been iterating on this specific design ever since. I wish I had the hundreds of iterations saved but I don’t.

Here is what the most recent iteration looks like with goal buttons that are easier to determine your progress and other tweaks to make the UI more consistent.

This is the design for this view I’ve settled on for now. I have plans to iterate on this current design for some time before, perhaps, taking a whole new swing at it. Perhaps my skills will grow to the point that I feel confident going back to Kyle’s low poly idea. But, I’m pleased with how it has come along so far.

Attending October’s NEPA.js meet up

On Tuesday, October 10 I attended October’s NEPA.js meet up. John George of NEPA Web Solutions was this month’s presenter and his topic was Bitcoin and the Blockchain: Democratizing How We Exchange Value.

I believe all members of NEPA.js would agree that John’s presentation was arguably the best the meet up group has had to-date. Though the Blockchain can seem a complex topic, John did an excellent job describing how it works, where it is currently being used, and its future potential. Though the meet up was relatively well attended, I left wishing that many more people had heard his presentation.

To further the lesson beyond the walls of the Scranton Enterprise Center, John also gave each attendee a gift in the form of a wallet containing a single bit of BTC. He also incentivized attendees to claim that bit for themselves by awarding the first few that did so with $50USD in BTC. Those that did it were rewarded indeed since the value of BTC has jumped to new record highs this month. Those that didn’t claim their bit may be kicking themselves for dragging their feet.

John will likely do this presentation again, in some form, under the NEPA Tech banner. Meet ups like October’s are what is spurring us to take the group in a more general direction. This particular presentation had nothing to do with JavaScript – as the name NEPA.js would have you believe – and so we want to make sure each meet up is approachable by all who would be interested. You may remember me saying this over the last few months, and even in January I spelled it out specifically, but now there have been positive steps towards this happening. We’ll have more to announce in the near future.

Thanks to John for the amazing presentation, and for the bitcoin, and to the attendees for the active discussion.

Side note: My apologies for a terrible pano photo. I’ll try to do better next time.

Developers, Let me tell you about Microsoft (audio)

I’ve been writing about Microsoft’s moves for the last three years. This week everything has come together and I’ve been writing my first multi-platform application using C# and Visual Studio. In this long rant I go on and on about how Microsoft needs to spread the word about what they are up to.

Links for this bit:

Download.

My tips for new iOS 11 upgraders

I’ve been using the iOS 11 public betas on my iPhone and iPad for several releases and I think it is one of the most important updates to iOS. It brings lifesaving features to the iPhone and powerful features to the iPad.

Tomorrow iOS 11 is being released to the public, so I thought I’d jot down a few things that I believe people should do on the day they upgrade, so that they don’t just move on with their busy lives and forget.

  • Turn on automatic Driving Mode detection. This setting could save your life and those of others. You have no excuse good enough to justify being able to text while you drive. iOS 11 does a good job of detecting when you are driving and turns off all notifications. Almost immediately when you exit your vehicle at your destination your messages are waiting for you. I love this setting. Settings > Do Not Disturb > Do Not Disturb While Driving.
  • Set up Driving Mode auto-replies. Optionally, you can set iOS 11 to automatically reply to certain people with messages that you’re driving. Or, you can keep this feature off and people will simply believe you have a life and cannot respond to every text message within 15 seconds of receiving one. Settings > Do Not Disturb > Auto-Reply To.
  • Customize Control Center. The Control Center (the screen you get when you flick up from the bottom of the screen, or swipe down from the top-right on the iPhone X) is very different than iOS 10. You can now add or remove buttons from it, and even customize their position on the screen. I’ve chosen to have Camera, Notes, and Voice Memos easily accessible in the bottom-right of the Control Center. I love it. Settings > Control Center > Customize Controls.
  • Identify faces in group photos. For those of you without a Mac, you’ve never had facial detection and naming capabilities for your photos. Now you can put a name to a face in iOS 11 and when your device is locked and plugged in it will rummage through your photos for you and find the vast majority of the other photos with that person in them. I’ve found that using large group photos is the quickest way to find the most people. So, start off finding a few dozen group photos, naming everyone in them, and then let iOS 11 go to work at night. It is surprisingly good and getting better with every release. Photos > Find a Group Photo > Swipe Up > Click on person under People > Tap “Add Name” (repeat for all people in the photo).
  • On iPad: Customize your Dock. You can have up to 15 apps in your Dock on iPad. You can also add more by adding folders of apps. There is also an area on the right side of the dock that can show recent apps. Turn on Recent Apps in Settings > General. Otherwise, drag your favorite apps into the Dock.
  • On iPad: Practice multi-tasking, split-screen, and drag-and-drop gestures. iOS 10 has had split-screen features for iPad since it was released and I still see many iPad users that do not take advantage of them. iOS 11 makes these features even more powerful. Unless you make these part of your muscle memory by practicing them, you might be under-utilizing the power of your device. Watch this video on YouTube to see how best to open multiple apps, drag-and-drop files, and more.
  • Try out Notes’ new features. Notes has some new features that you will definitely find useful but you need to know they are there. Try some of the following:
    • If you have an iPad Pro with Apple Pencil, try tapping your Pencil on the lock screen. This results in a new note. Pretty slick.
    • Try the document scanner. iOS 11’s ARKit features allow for a pretty practical use of this technology in scanning a document and being able to sign it with ease. It is remarkably good. Put a document on a table, open Notes, in a new Note hit the + symbol, select Scan Documents. Prepare to be wowed. I wish this feature were part of the camera somehow or its own mode from Control Center. Again, here is a good video showing how this works.

By doing the above you may just save a life. But, also you’ll get far more use out of the device you already own and take full advantage of this monumental release of iOS.

If you have any others, feel free to leave them in the comment section below.

Attending September’s NEPA.js meetup

On September 12th, NEPA.js held its September meetup. Anthony Altieri presented on beacons – the typically small Bluetooth devices that “chirp” some very basic information multiple times per second allowing app developers to understand the proximity of a user. This allows for things like understanding where a shopper is in a retail space.

His overview of the devices, the spec, some of the software, and the differences between iOS and Android and between iBeacon and Eddystone was a really nice introduction to the space. He did a great job.

I learned a lot during his presentation. Thanks to him for putting it together.

If you haven’t yet been to a NEPA.js meet up and you live in our area – I implore you to check one out. It is consistently attended, always fun, and isn’t always focused solely on JavaScript. But even if it was, it would be worth your time.

A unique color for every address in the world

A recent, yet-to-be-announced client project had me designing a mobile app interface that dealt a lot with showing locations and events that are happening at certain locations (how is that for vague? sorry).

While I utilized the brand’s colors to represent certain sections of the app I wanted the app to have tons of colors in order to portray a sense of fun throughout the app. But how could I incorporate pinks and yellows and bright greens without the overall brand disappearing?

After toying with a few design directions I had an idea: create a unique color for every address in the world. This would result in two benefits: first, each location was then branded as a color, and second, every user would see that location as the same color. If I were a user of the app here in the US and I flew to Spain and looked at a location for an event there, I would see the same exact color representing that address as the person that lived in Spain and created that event.

Since I wasn’t to be the developer of the mobile application I wanted to avoid the possible pushback this idea might receive from that team. I didn’t want to add burden to the other people on the project by showing a design mockup and a set of requirements and then walking away. I wanted it to have zero overhead for the developers.

One of the solutions I discarded was generating a random color each time an event location was added to the service and then storing the color for that address in a database. While this solution is relatively simple to implement, it was no good. It adds more work for the developers and they have to maintain the datastore indefinitely. Several other ideas with the same caveats came to mind and I quickly tossed them into the bin.

Once I eliminated all of the ways I didn’t want to solve this problem – the solution came pretty quickly.

Since every address is already unique, I just needed to find a way to represent an address that could be turned into a color. In other words, I wanted the address itself to represent a unique color. And I wanted to do it in realtime as the application’s UI loaded.

So I jumped into JavaScript and began working it out. Here is what I settled on:
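
(The snippet itself is embedded on the original post. A minimal sketch of the idea — hash the address string deterministically, then use the low 24 bits as an RGB hex color — might look like this; the hash function and names are my assumptions, not necessarily what shipped.)

```javascript
// Sketch: derive a stable color from an address string.
// Same address in, same color out — no database required.
function colorForAddress(address) {
  let hash = 0;
  for (let i = 0; i < address.length; i++) {
    // Simple 32-bit rolling string hash (djb2-style variant).
    hash = (hash * 31 + address.charCodeAt(i)) | 0;
  }
  // Keep 24 bits and format as a #RRGGBB hex color.
  const rgb = (hash & 0xffffff).toString(16).padStart(6, "0");
  return "#" + rgb;
}

colorForAddress("1600 Pennsylvania Ave NW, Washington, DC");
```

Because the color is computed from the address itself, any client can derive it locally at render time, which is exactly the zero-overhead property I was after.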

This solution allows for just under 16.8 million colors (2^24 = 16,777,216). Far more than this app will likely require during its lifespan.

Here is a demo of the process and if you view the source you can see the code at work. It is fairly simple to follow.

Oh, there was an issue that I ran into with this solution that was fun to solve. If the background color that was generated was too dark the text became hard to read. So digging around I found a way to determine the luminosity of the background color and thus change the text to something a bit lighter in those instances. That too is shown in the demo.
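
One common way to do that (an assumption on my part here — the demo’s exact formula may differ) is to compute a perceived luminance for the background and flip to light text when it falls below a threshold:

```javascript
// Pick a readable text color for a given #RRGGBB background.
function textColorFor(hexColor) {
  const r = parseInt(hexColor.slice(1, 3), 16);
  const g = parseInt(hexColor.slice(3, 5), 16);
  const b = parseInt(hexColor.slice(5, 7), 16);
  // Perceived luminance, weighting green highest (ITU-R BT.601 weights).
  const luminance = 0.299 * r + 0.587 * g + 0.114 * b;
  // Dark backgrounds get near-white text; light ones get dark text.
  return luminance < 128 ? "#ffffff" : "#222222";
}

textColorFor("#112233"); // dark background → light text
textColorFor("#ffeeaa"); // light background → dark text
```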

I was then able to repurpose this demo code and give production-ready code to the developer that is going to ship in the app. When that ships I’ll write more about it.

Colin Walker: “Should replies be posts?”

Colin Walker, in a post on whether or not replies to other posts (or, comments) should be their own posts:

There has to be a line, a point where a comment is just that and not a reply. It’s a question of semantics but not everyone’s answer to “what is a comment and where does it belong?” will be the same.

I struggle with this a lot.

It is likely the point I should have made in my post regarding Micro.blog becoming a commenting service (and the fact that I don’t like that). I don’t want to reply on my blog to every reply to my posts on M.b because then I would have dozens and dozens of posts on my blog that would be very tough for readers to follow contextually. I believe the commenting mechanism that has been around for decades, even un-threaded, is far more useful than dozens of disparate posts stitched together loosely with a link that says “in reply to”.

Webmention attempts to bridge that gap between post and reply but that also is tough to follow along if the thread gets unwieldy.

However, I also don’t want to reply to every reply on my posts directly on M.b either (though, I do from time-to-time) as that isn’t much better than using any other silo like Twitter or Facebook. Should M.b go away, all of those conversations would be lost.

This isn’t a new issue nor is it exclusive to M.b. If I replied on my own blog to other people’s posts on their own blogs (like I am in this post to Colin Walker’s blog) then one side of the conversation could disappear at any time. I can only control my side of the equation. But at least if I have my own blog I have control of that one side.

I think it is good that these topics are being discussed again. The same debates have been swirling since blogging began, they swelled again when the indieweb movement began to take shape, and I think they are happening again as a result of M.b’s growing community. I do not believe there is one single answer to many of them. You have to do what is right and sustainable for you.

For now, here are my personal rules for replying to posts. These will most definitely change over time.

  • If I want to say a quick “congrats” or “excellent post” or something of that nature I will leave a reply directly on their blog. If they do not have commenting turned on I will attempt to email. If they do not have email publicly available I’ll say nothing at all.
  • If I have something substantive to add to the conversation, or if I would like my “followers” to see the post I will quote the post on my blog with my additions to the conversation. Like this post.
  • If I simply want to direct people to the content I will use my new repost tag that I’ve been experimenting with. I’ve seen others use the “a post I liked” type post. That could work too.
  • If people reply using M.b, Twitter, or Facebook I will not reply on those services*. But I may reply on my own blog.
  • If I would like to keep my reply private I will attempt to email.

As an aside: I know some of you do not want to leave a public comment. I love getting reader emails. I get a fair number of them. And some of them have been excellent conversations. So please don’t hesitate.

* I no longer have a Twitter or Facebook account. I do have a M.b account but I’m beginning to wonder if I need one as I have my own fully functional weblog. If I didn’t and I wanted a microblog and didn’t want to use Twitter, I could see having an account. If I wanted a more fully featured blog I still believe WordPress is the best tool for that. Also, I’m sure as the M.b community grows it could mean that my content would be discovered by more people. I think M.b may end up being a thriving, well run, community and service. It is why I backed Manton’s efforts via Kickstarter. But, if I have my own blog, and if I really don’t care much about my content being discovered, then I see little reason to syndicate to it. For the time being I’m still going to as I want to see how the service matures.

Presenting at the August 2017 Lehigh Valley Tech Meetup

The Lehigh Valley Tech Meetup is an excellent community in the Lehigh Valley that meets monthly at the Ben Franklin Technology Partners incubator within the Lehigh University Mountaintop campus. The community around the meetup is excellent and the building is amazing*.

While the tail-end of my presentation walked through my experience building my first iOS app Summit, the majority of my presentation was focused on helping early stage companies think about their go-to-market strategies.

I’m currently advising several companies, a few of which are businesses built around mobile apps, and have heard about 11 other start-up pitches this year so far. And during that time I’ve noticed a trend. Entrepreneurs that are attempting to build a business around an app sometimes underestimate the amount of thought that should go into the marketing and sales strategy for the app. It is as if some feel that apps require less thought and work than products that you can touch. So during my presentation at LVTech I hoped to convey that the same “boring” (yet tried and true) business practices that apply to physical products also apply to software.

A few questions I urged those thinking about building a business around an app to consider were:

  • Does your idea service a large enough segment of the market? We hear the “scratch your own itch” mantra a lot. However, it won’t always lead to finding hundreds, thousands, or tens of thousands of customers.
  • How will you reach those customers?
  • Are there ways to expand your idea into other products or services that can be sold to the same segment?
  • How will you sell or package your idea?
  • What will the price be? (free, one-time payment, subscription, service contracts)
  • What channels can you leverage to sell your idea? (App Store, retail, online, conferences, distributorships, via a sales force)

By considering these, and many other questions, you can determine if your idea has enough layers to support an entire business or if you just have an app idea**.

I also briefly discussed three misconceptions I’ve been seeing over the last year dealing with very early stage start-ups. These misconceptions were:

  • Press-based launch strategies: some think that being covered by the press will be enough to get them to profitability. They have no other strategy. On the contrary, getting press coverage early on will give you very muddy analytics, which will make decision making very difficult. Very seldom is the tech audience your real customer base.
  • How long until profitability: More and more entrepreneurs begin with the plan of losing money for 3 or more years. I believe this stems from press coverage of other companies getting large rounds of funding. Most businesses should strive for profitability within the first quarter or year of business.
  • ”I’m not technical, I need a technical co-founder”: Don’t be this person. Anyone can learn to code. Geeks are not smarter than you. They’re just interested and relentless. Be the same.

We then did about 10 minutes or so of questions and answers. The questions I got were really great and I appreciate all those in attendance helping me with the answers to the questions I didn’t have much experience in.

Thanks to Tim Lytle for the invitation to speak and to Ben Franklin Technology Partners for the continued support.

* I worked in this same building for years while at Viddler. But when I worked there the back half of the building didn’t exist. In fact, Viddler started in Jordan Hall – the building just beside the new building. And now, they are extending it even further. The building is an amazing place to work and have a meetup of this kind. I’m jealous that our incubator in Scranton feels so dated when compared to this building. Especially comparing the meeting spaces.

** It is totally fine to “just have an app idea”. I do. And I’m loving working on it. But it is also good to have the proper perspective about your app idea.

Summit – The Adventurous Step Counter

This evening, at a presentation at the Lehigh Valley Tech Meetup, I’m opening up public beta access to my new iOS app, Summit – The Adventurous Step Counter.

I’ve stitched together a temporary web site for the app as well as a mailing list that will allow you to get access to the final few beta builds prior to public release. If you have an iPhone please consider signing up and giving it a spin. I’d be very grateful for your feedback.

Thanks to the 13 private beta testers who have already tested the app and provided feedback. You can expect a brand-new build of the app coming in September.

What is Summit?

Summit is a free, iOS-only app that uses your step count to virtually hike up tall peaks like Mount Everest in Nepal, learn about amazing landmarks like Diamond Head in Oahu, and even take a leisurely stroll down famous streets like Lombard Street in San Francisco. As you make progress on your journey you’re provided new information at each goal.

At the time of public release there will be 5 summits and new summits will be added each month thereafter.

Here are some screenshots of the app as it is currently:

When I started on Summit I did not know how to develop an iOS app. It has been really fun to learn Swift, Xcode, iTunes Connect, TestFlight, and the myriad of other things needed to get this app as far as I have.

I still have a bit of work to do, but I’d love your feedback along the way as I finish the app up for release.

My personal blogging tips

I’ve been writing things down on my own blog for a few decades. I wish more people did too. If you’d like to have a personal blog but struggle finding things to write about, here are a few tips that may help.

  • Don’t post about what you will do, post about what you’ve already done – In other words, I try to avoid the “I should blog more” posts and just get on with blogging more. Also, I like posting photos and status messages sometime after they’ve happened.
  • Find a theme – Niche blogs do extremely well. So stay on topic. Personal blogs do less well but they should still have a theme and that theme should be you.
  • Create reasons to post – My What I saw series and observations series give me a reason to write. Should I feel writer’s block I can fall back to one of the series.
  • Have a schedule – I try to post one or two posts per day prior to 9am. Some are scheduled in advance, some aren’t. Everything else that happens is completely random.
  • Be totally fine with missing the schedule – Sometimes I don’t blog for a few days or weeks due to time off away from the computer or just being focused on something else. And I’m totally ok with that.
  • Don’t post test posts – Create a staging or a local development environment to test your site’s features. It is really easy to do.
  • Try not to care about stats – Stats are useful for a number of reasons but obsessing over them won’t help you at all. Check them once a month to see how you’re doing.
  • Create an inspiration list – In your notebook or notes app write down some topics you’d like to write about someday. Make it long. Like, 50 items. Don’t worry too much about what should be on it just start writing the list down. When you can’t think of anything to write about look at that list and simply pick any one at all and check it off.
  • Subscribe to a bunch of blogs that interest you – More than likely the conversations started by others will give you more than enough to write about.
  • Perfect is the enemy of good – Just hit publish.
  • Have fun! – I’ve thoroughly enjoyed blogging all these years and I don’t imagine I’ll be stopping any time soon.

If you have a neglected blog or are just starting one – jump in! Oh, and don’t forget to email me the URL.

Attending the August NEPA.js Meet up

The NEPA.js Meet up is really hitting its stride. Each meet up is well supported – even in the summer – and the camaraderie and general feeling around each event is great. The Slack channel is quite active, too.

If you’re within an hour or so of Scranton I’d recommend joining the meet up group, jumping into the Slack channels from time-to-time, and attending at least a few events per year. If you need help with any of these things send me an email.

Also, within the past few weeks we’ve seen a new group spin out of the NEPA.js group: a more general meet-and-work-on-stuff type of group created by Den Temple. It fills the gaps for when there isn’t a NEPA.js event.

This month’s presentation was by Ted Mielczarek. Ted works at Mozilla on build and automation tools for Mozilla’s primary product, Firefox. He has, though, dabbled in a variety of other things at Mozilla like crash reporting and the gamepad web API. It was his experience building this API that spurred this month’s topic: Web APIs.

I remember jumping onto the web in the 90s and being blown away when I was able to put animated GIFs of X-Wing fighters on my personal Star Wars fan page. Today, web browsers support a variety of Web APIs that make the open web a true software development platform. There are APIs to control audio and video, to connect to MIDI-enabled devices, to connect to Bluetooth, VR and – of course – to allow for game controller input. There are lots of others too.

Ted did a great job showing demos of many of these APIs. Just enough for us to get the idea that the web has matured into a powerful platform upon which just about anything can be made.

Thanks to Ted for the work he put into creating the presentation and to all the attendees for helping the NEPA.js community thrive.

Following Twitter accounts via RSS

I haven’t missed Twitter that much since deleting my account. The first week or two I missed Moments – but once that subsided I realized that Moments are generally a waste of time. Realtime reporting of most newsworthy events results in ill-informed, unsubstantiated tweets. I’m at a point now where I’d much prefer to get the real story after-the-fact rather than realtime.

There are instances where realtime reporting can be incredibly useful, such as when there is a fire, a traffic accident, or a natural disaster happening. Those tweets can save lives. But, in general, I’m perfectly OK with reading up on the news once or twice daily to see what really happened.

I do miss certain Twitter accounts. Especially those that do not have a blog or web site counterpart that I can follow along through another medium. And since Twitter is still web and developer hostile (meaning their API is far too limited and they don’t support open web distribution technologies like RSS) I’ve missed out on a lot of great content from those Twitter accounts.

So today I went searching around for some RSS feed generators that would use what little access to Twitter they have (presumably the limited API or HTML scraping or both) to create an RSS feed from accounts or hashtags or lists. There are a number of services out there: some you have to pay for, others toss in ads, and still others are severely limited.
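How these generators work internally isn’t public, but the core idea is simple enough to sketch: take a list of items (however they were obtained) and emit an RSS 2.0 feed. Here is a minimal Node sketch with entirely hypothetical data – no scraping, just the feed-building half:

```javascript
// Minimal sketch of the feed-building half of a Twitter-to-RSS service.
// The account, URLs, and tweets below are hypothetical placeholders.
function escapeXml(text) {
  return String(text).replace(/[<>&"']/g, (c) => ({
    '<': '&lt;',
    '>': '&gt;',
    '&': '&amp;',
    '"': '&quot;',
    "'": '&apos;',
  }[c]));
}

function buildRssFeed(title, link, items) {
  // One <item> per tweet, using the tweet text as the title.
  const entries = items.map((item) => [
    '    <item>',
    `      <title>${escapeXml(item.text)}</title>`,
    `      <link>${escapeXml(item.url)}</link>`,
    `      <pubDate>${new Date(item.date).toUTCString()}</pubDate>`,
    '    </item>',
  ].join('\n')).join('\n');

  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<rss version="2.0">',
    '  <channel>',
    `    <title>${escapeXml(title)}</title>`,
    `    <link>${escapeXml(link)}</link>`,
    entries,
    '  </channel>',
    '</rss>',
  ].join('\n');
}

// Hypothetical scraped tweets:
const feed = buildRssFeed('@example tweets', 'https://twitter.com/example', [
  { text: 'Hello, world', url: 'https://twitter.com/example/status/1', date: '2017-08-01' },
]);
```

The resulting XML string is what a service like this would serve at the feed URL it hands back to you.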

Then I found Publicate. I’m using Publicate’s Twitter RSS Feed Generator to create a few feeds based on some Twitter accounts I miss the most. You simply type in the URL you want to create a feed from, give them your email address*, and they provide a feed URL. So far it seems to be working. I’ve created a new collection in Feedly to store these feeds. Hopefully I’ll get the tweets I wanted to see most and I won’t have to deal with the drivel and hate I’ve seen on Twitter over the last 18 months. Or even Twitter itself!

* I certainly don’t mind my email address being a form of payment to a company. So I gave it to them. But, if you’re a bit of a hacker it is quite easy to dismiss the overlay, read the page’s source, and grab the feed URL without giving Publicate your email address. I want this tool to stick around so if my email address helps them to keep it up-and-running so be it.

Snapchat is a party, LinkedIn is a business lunch

Colin Walker, like me, struggles with what should be syndicated to networks and what should be brought back into the blog context. He makes this specific point about replies:

Social replies like on Twitter or Facebook don’t, in my opinion, need to be owned – they belong in the context of the social network and that particular conversation.

I suggest reading his entire post so that you get a clearer picture of his struggle.

As you may know I’ve decided to leave social networking altogether and so I don’t have this struggle any more. However, one analogy came to mind when I was reading Colin’s post.

When Snapchat arrived on the scene many in the blogosphere thought it was crazy to have such an ephemeral medium sucking up so much oxygen. I didn’t see it that way. Perhaps I didn’t love Snapchat but I didn’t see it as bad simply because you couldn’t save what you posted there. It reminded me of going to a local pub. If you drop in at a pub for a pint and rattle off some diatribe about your favorite sports team to the other pub-goers – does that really need to be saved somewhere? If I’m having a random conversation about a movie I saw recently while sitting around a campfire with a friend, does that belong in the Internet Archive?

If we view each site on the web as a real physical place then we begin to realize that some places are museums, some libraries, others local pubs, and still others are rowdy nightclubs. Each have their place to make up the human existence but not all need to be saved or syndicated or shared.

I simply do not view Facebook and LinkedIn and Twitter and Snapchat and Instagram the same as I do my blog. So I do not believe that all of the content that I post here should end up there and vice versa. Some things deserve to disappear. And there is a certain beauty in that. The same way I enjoy a good local pub rant.

Colin’s struggle is real – it isn’t easy to choose what gets saved and what doesn’t. What should go to one network and not another. Especially in the moment it is very difficult to know. And, it is complex for a single person to maintain that connective technology to allow that to happen in the first place.

I don’t envy his position. I don’t know what I would do if I were him. But, for me, not being on any social media currently has made my decision very easy. What I share here stays here. Everything else you’ll never see. And I’m totally cool with that.

Observations on using the iOS 11 Public Beta

The iOS 11 Public Beta is the first beta OS I’ve installed from Apple. I did so in part because I want to help improve the OS by providing feedback and analytic data, in part because I wanted to test the app I mentioned above, and lastly because I’ve wanted driving mode since the very early days of iOS.

I waited until the second developer beta (which was the first public beta I believe) was released before I updated my iPad. And I waited until the next developer release (or, second public beta release) before I updated my iPhone. I waited in hopes that there would be a great enough improvement in these builds that I didn’t have to worry too much about my iPad or iPhone not working at all.

I thought I’d jot down some observations during my use:

  • So far the “biggest” problem I had was charging my iPad. During the first public beta the only way I was able to charge my iPad was by plugging the lightning cable into the iPad first and then plugging that cable into a power outlet. Weird, I know. But the next public beta has seemingly fixed that.
  • While there are minor UI niggles that could be easily pointed out, I’m going to refrain since they seem to be cleaning up the loose ends very quickly. This last public beta build fixed a slew of issues.
  • Driving mode is beginning to work very, very well. I’ve had trouble getting Siri to start a song from Apple Music after a podcast episode in Overcast finishes playing – but perhaps that will get fixed in an upcoming release. Overall, this feature is going to be a lifesaver.
  • The style and controls aesthetic are much better in my opinion. Previous releases of iOS attempted to be too “elegant” (unsure if this is the term I’m looking for) by being overly thin and translucent. This latest release of iOS brings some sanity to the UI. Also, as I get older I’m beginning to appreciate the larger text sizes throughout.
  • The new App Store should prove to be a huge improvement over the previous versions. It remains to be seen whether or not Apple’s team will keep up with the editorial (since they’ve yet to update any content in there) but I’m hoping they’ll do this part great when the time comes.
  • Though I use iCloud Drive, Dropbox, and other file sharing platforms I’ve not put the Files app to the test just yet. Perhaps I don’t see the need for it as much as others will. I’ll report back after I’ve used it more.
  • The Notes app is incredibly good at this point. I switched to it from Simplenote and I’m loving it.
  • iOS 11 shines on the iPad.
  • The new keyboard on the iPad is particularly cool. You essentially pull down slightly on a key as you type if you’d like the letter you’d usually get by holding down the shift key modifier. Great idea.
  • Oddly enough, the new multitasking capabilities on iPad don’t work as well yet for me as the old way. I’m sure I’ll figure it out and get used to it but the “dock” and dragging icons out of it, etc. does not work for me very well. It could also be that apps haven’t yet been released with support for that feature.
  • iOS 11 has “broken” a ton of my apps. Not beyond usability but I’m guessing that developers are scrambling to get new iOS 11 builds ready. Some of the oddities could be very difficult to fix.
  • coreML and ARKit are incredibly cool.

While I don’t yet recommend updating to the iOS 11 Public Beta for most people – if you’re willing to deal with a few hiccups, the driving mode feature may save your life. I can’t imagine going another day without it. Apple cannot get this version of iOS out soon enough in my opinion.

Presenting at the July NEPA.js Meetup

Earlier this week my Condron Media cohort Tucker Hottes and I presented at the July NEPA.js Meetup. Our presentation was about automation and all of the things we can automate in our lives personally and professionally. And also how we employ automation in our workflows for creating applications and web sites using our own task management suite.

Here are just a few examples of reproducible tasks that you can automate that perhaps you haven’t thought about:

  • Your home’s temperature
  • Applying filters to multiple photos at once
  • Social media posts
  • Combining many files together into one
  • Deleting unused files
  • Calendar events

There are countless others. Perhaps you’re doing some of these things now. You might set a reminder for yourself to clean the bathroom every Tuesday. Or, you’re using a Nest to control your home’s temperature based on your preferences.

But there may be others that you’re not doing. Posting regularly to social media can seem daunting to some. But automating those posts can make it much easier to set aside time to schedule the posts and then go about your day. Or editing photos or video may never happen because you don’t have time to go through them all and edit each one individually. But these are tasks that can be automated.

We showed a quick demonstration of automating the combining of multiple text files using Grunt. Something like this can be useful in lots of ways: merging comma-separated value (CSV) reports from many retail locations into a single file, web development build steps, and more.

Then Tucker provided a list of all the tasks we do when we get a new client at Condron Media. The full list can take a person up to 1.5 hours just to “start” working on that customer’s project. So we’ve begun whittling away at that list of tasks by using another task runner called Gulp. We call this suite of automation tasks Bebop – after one of the thugs from Teenage Mutant Ninja Turtles.

Bebop is separated into the smallest tasks possible so that we can combine those tasks into procedures. Creating new folders, adding Slack channels, sending Slack messages, spinning up an instance of WordPress, adding virtual hosts to local development environments, etc. etc. Bebop can then combine these tasks in any order and do them much quicker than a human can clicking with a mouse. We estimate it will take 1 minute to do what took 1.5 hours once Bebop is complete.

Another benefit of automating these types of tasks is that you can nearly eliminate human error. What if someone types in the wrong client name or forgets a step in the process? Bebop doesn’t get things wrong. Which saves us a lot of headaches.

Here is the example Gulp task that we created to demo Bebop to the NEPA.js group.
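The shape of that demo can be sketched without Gulp installed. The version below stands in for `gulp.series`, and the task names are hypothetical – Bebop’s real task list isn’t reproduced here:

```javascript
// Dependency-free sketch of composing small tasks into procedures,
// the way Bebop combines Gulp tasks with gulp.series. Task names
// are hypothetical placeholders.
function series(...tasks) {
  return async function run() {
    for (const task of tasks) await task();
  };
}

// Each small task does one thing; here they just record that they ran.
const log = [];
const createFolders = async () => log.push('folders created');
const addSlackChannel = async () => log.push('slack channel added');
const installWordPress = async () => log.push('wordpress installed');

// Small tasks combine, in order, into a one-shot "new client" procedure.
const newClient = series(createFolders, addSlackChannel, installWordPress);
```

Because each task is as small as possible, the same pieces can be recombined into other procedures without rewriting anything.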

We then asked the group to take 5 minutes and write down what they would like to automate in their lives. The answers ranged from making dog food to laundry to simple development and environmental tasks. Everyone in attendance shared at least one thing they’d like to automate.

Tucker and I had a blast presenting but we enjoyed this final session the most. Similar to my event suggestions to Karla Porter earlier this year, I find that the more a group interacts with one another the more I personally get out of a meetup or conference. Presentations can be eye opening but personal connections and calm discussions yield much fruit for thought.

Thanks to everyone that showed up. I think we had 14 or 15 people. The NEPA.js community is active, engaged, and I’m very happy that it is happening in Scranton.

Observations on building my first iOS app in Swift

In early June I decided I wanted to learn iOS app development using Swift.

I’ve made a lot of progress over the last month, building two apps that I can use on my own phone, and one app that I’m now in beta testing via TestFlight with a few friends. Over the last month I’ve made some observations on the process of building an iOS app, the Swift programming language, Xcode, iOS frameworks, and the various other bits needed to make an app. I thought I’d take the time to jot those down.

These are in no particular order:

  • Swift is growing on me rather quickly. The idea behind Swift has always interested me, but I hadn’t really given it a try until now. Like any new language, you need to work with it for a time before you start to see the wisdom in some of the things you initially dislike about it.
  • I’m very glad I waited until Swift 3 before trying it in earnest. The tutorials I’ve come across for earlier versions make it clear the language has matured in a short period of time.
  • Using Storyboards in Xcode is not intuitive whatsoever. I know many people avoid them altogether (from what I’ve seen on YouTube). Unless you watch someone build a Storyboard you’d likely never, ever just figure it out.
  • iOS frameworks are bulky. It is no wonder so many apps are so big. Just including one or two frameworks for my very simple first app ballooned it to over 15 MB.
  • That being said, iOS frameworks are very useful. With just a few lines of code you can get something working quickly.
  • Playgrounds are very useful to learn Swift.
  • The Playgrounds compiler can become stuck rather easily. Especially if you paste in a bunch of code from your project to mess around with and get it to work. I’ve had to restart Xcode several times.
  • Xcode has crashed on me a few times over the last month. Crashes on macOS (and also most Apple apps) are very rare. So to be working on something so fragile seems out-of-character. Especially with how simple my apps are currently.
  • Auto Layout baffles me still. I have a working UI for one of my apps that works across multiple device screen sizes. But it is far from what I’d want to ship with. I’ve watched a lot of videos on how to use Auto Layout but I still can’t make heads or tails of it. I’m waiting for the moment it clicks.
  • The connection between labels and buttons and other UI elements in your Storyboard and your Controller class is far too fragile. You should be able to rename things, delete things, and move them around without completely blowing everything up and starting over. Example: if I Ctrl+drag a label onto my Controller and create a Referencing Outlet for it… I should be able to rename that Outlet without needing to Ctrl+drag again. I don’t know how, but somehow.
  • Did I mention that Auto Layout baffles me still?
  • Building and deploying an app to iTunes Connect in order to add to the App Store or Test Flight is an entirely un-Apple-like experience. There is no Step 1, Step 2, Step 3 type of workflow. Similar to Storyboards it is not something you can figure out – you must watch or read to learn. It feels like it was never designed by a Product person.
  • Building an app that resides on a device like the iPhone is an amazing experience. While I’ve always been able to load my web apps on a phone, and I’ve built some apps that use a WebView to deploy across multiple platforms, this is the first time I feel like I’m touching my app when I use it. There is nothing that comes close to native UI.
  • Also, building an app that requires no connection to the web has been really fun. It is so fast! I’d like to move forward by trying my best to keep HTTP requests at zero or as low as possible.
  • The amount of information an iOS device knows at any given time is pretty amazing. It can know (with the user’s permission) where it is, what altitude it is at, which way it is pointing, how many times the person’s heart beat that day, what it is looking at, etc. It is amazing to play with these features.
  • The Xcode IDE is really incredible to use. You may not remember a framework’s properties but you can just begin typing a reasonable word and expect that Xcode will figure out what you’re trying to accomplish. Also, if you happen to write older syntax because you’re following an out-of-date tutorial, it will automatically convert it to the most recent syntax.

Overall I’ve had a positive experience learning to build an iOS app on my own. Going from having an app in TestFlight to shipping an app feels like preparing to cross a desert on foot. But, I’m enjoying my experience so I’m going to trudge forward to do so.

I hope to ask for public beta testers of the app in a few weeks or a month.

Observations on Apple Music

I switched from a paid Spotify account to a paid Apple Music family plan earlier this month. Since doing so I’ve used the service nearly every single day via my Mac desktop, my iPhone, and my iPad. I’ve created playlists, downloaded tracks, loved and disliked albums, followed artists, used Siri’s built-in “What’s this song?” feature, and more. So I think I have had enough experience to jot down some observations of the service so far.

If you work at Apple and are reading this right now; first, thanks for listening to your customers. Second, you’ve doubled-down on Apple Music once in the past when you realized it wasn’t good enough. I hope you do so again because the service could be excellent.

Here are my observations in no particular order. They are mostly negative, not because I dislike Apple Music overall but because the things I make note of the most are the things I expect to work well that haven’t. And at the moment, I’d say that Spotify is “better” than Apple Music. That said, I see a lot of potential in Apple Music becoming incredibly good.

  • Apple Music via iTunes is incredibly slow, difficult to navigate, and feels like it is on the cusp of being replaced. Slow may not even be the right word, because there are times when choosing an artist in Apple Music’s search results in a blank screen in iTunes. It happens regularly. This leads me to believe that Apple will break apart the monstrosity that is iTunes sometime in the very near future and give us an Apple Music app.
  • Apple Music via iOS is quick, easy to use, and even fun to play with. I’d wager the vast majority (like, 95%) of Apple Music’s use is on iOS. If it were not, they already would have given us the standalone Apple Music Mac app.
  • Song history sometimes doesn’t sync between devices. I listened to a song that was suggested by Apple’s “For You” algorithms at work in iTunes on my Mac. Later that evening I wanted Eliza to hear it so I looked it up while I was at home on my iPhone and I couldn’t find the song. I know I listened to it. But the history didn’t have that song in it. I fired up iTunes, looked through my history, and sure enough it was there.
  • Music streaming overall is good quality and quick. Via iTunes songs can take a few seconds to begin but on iOS they seemingly start instantly.
  • Trying to watch my friend Gary’s new Apple Music show Planet of the Apps wasn’t very easy. I couldn’t stream it to my Apple TV from my iPad (unsure why) and looking for it on my Apple TV didn’t yield any results. I’m one generation behind on Apple TV and so I think this is Apple’s way of telling me that I need to spend more money. We ended up watching it on my iPad.
  • Connect, the feature that allows you to follow artists like you would on Facebook or Twitter, is a joke. I honestly do not know why Apple even tried again. (They used to have a similar feature in iTunes called Ping.) They should consider showing the artist’s most recent activity on other social media platforms, but creating Apple Music’s own network is worthless. It didn’t work before. It won’t work now. Most artists have only posted once or twice to Apple Music Connect. And most of the time the posts are the same as the ones they put on Facebook. Artists post to where most of their fans are. Some artists have tens-of-millions of followers on other networks. The Apple Music Connect feature should simply be replaced by the ability to “Love” an artist the same way you “Love” a song or album. This way it can still inform Apple’s algorithms for suggesting music to you. Other than that, show their most recent tweet and be done with it.
  • Apple Music’s “For You” playlists are OK but nowhere near as good as Spotify’s. I remember when Spotify released their Discover Weekly Monday playlist feature. Each week you’d get a curated playlist (presumably curated by bots) based on what you’ve listened to recently. It was incredibly good. I constantly read tweets where people were surprised at how good this was. Almost every week I felt as if someone created a mixtape for me. I’ve never had that same reaction to Apple Music’s playlists. In fact, just writing this makes me want to switch back to Spotify.
  • Speaking of playlists. Spotify has huge collections of playlists based on mood, or if you’re exercising, or genre, or just random crazy things. Apple Music doesn’t have anything like this at all. It might suggest a “90s playlist” or a “new music” playlist. Other than that, the selections are thin. When a catalog of songs is as large as Spotify’s or Apple’s… the only way to surface new stuff to customers is to slice that information 1000 different ways and keep putting it in front of people.
  • Many of the artists’ pages on Apple Music seem to lack attention. I’m sure their creative team is working hard on making sure that every single artist has an excellent-looking page. I think every artist, from old to new, deserves an amazing artist page with images, a bio, etc. I hope they have a ton of people assigned to this. It is worth giving people the experience that every artist is important to Apple.
  • Creating a playlist in Apple Music in iTunes is very, very odd. I think it is due to the fact that Playlists end up in your Library – which, to me, feels like the old days of having your own collection of MP3s. Perhaps a younger crowd doesn’t even see this as an issue because they’ve never purchased, ripped, or downloaded MP3s. On Spotify this issue doesn’t exist because “Your Music” is just a collection you’ve created. Again, if iTunes is disbanded it would alleviate this weirdness I think.
  • Beats Radio is fun to pop into now and then but it hasn’t been something that I listen to regularly. I don’t have any specific suggestions for how this could change other than to say that I wish the best segments from these programs somehow popped up from time-to-time so that I’d see them.
  • Using Siri’s “What’s this song?” feature is fun. It makes it easy to find a song that you’ve heard, add it to your library for later, and be able to listen to it again.
  • However, using Siri to interact with Apple Music is, in general, less than good. Siri, as I’ve said before, is just bad at this point. So as Siri improves, so will its ability to interact with Apple Music.
  • One last comparison to Spotify; Spotify has an “easter egg” that if you’re playing a Star Wars album the progress bar turns into a lightsaber. Silly? Yes. But it shows that they put time and effort into making the experience fun. I’m sure there are other similar features that I simply haven’t stumbled across. I don’t see any of that whimsy in Apple Music yet. Perhaps because they are still playing catch-up but I’d love to see some “fun” thrown into the app itself.

I believe Spotify is winning on many fronts right now save one; integration. Apple will always hold all of the keys to iOS and macOS. As a result they’ll always be tightly integrated with Siri, the devices and hardware, etc. But even at that disadvantage Spotify proves itself to love music, to have found many interesting ways to surface music you will like, and is easy to use. But I do believe that Apple Music still has a chance to catch up.

Whether you use Apple Music or Spotify you’re in for a treat. You can pay just a few dollars per month (far less than a single album used to cost) and play any song you want at any time. Or, have music constantly playing while you cook, clean, shower, drive, etc. There is no limit on usage. If you like music, subscribing to one of these makes a lot of economic sense.