Modeling Value Streams using DevOptics Free Plan

Hot off the press: you can now model your value stream and track your deployment frequency using the Free plan for CloudBees DevOptics. We released this new functionality as part of the ramp-up to next week’s DevOps World | Jenkins World San Francisco. There are two main new aspects: an improved, and free, value stream modeler, and the extension of the Deployment Frequency metric to the Free plan.

With the stream modeler you can capture how software flows through your system (or you can design how you want it to flow) by creating and connecting Gates. Then, if your flow is based on jobs running on Jenkins or CloudBees Core CI/CD masters, you can connect these gates to the jobs using the DevOptics Plugin and leverage the computational aspects of CloudBees DevOptics.

The connected masters need to be configured against a DevOptics account, on either the Free plan or the Premium plan. Both plans provide CD platform monitoring (Run Insights) – the Free plan only provides 7 days of statistics, while Premium extends that to 90 days. The Free plan now also provides the Deployment Frequency metric for all gates, albeit also only for 7 days. The Premium plan adds two more job-based metrics, Change Failure Rate and Mean Time to Recovery, as well as Mean Lead Time. Mean Lead Time is based on value propagation, in the form of commits and tickets, through your Value Stream, which is a key feature of DevOptics Premium.
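Deployment Frequency itself is easy to reason about: count the successful runs of the gate’s job over a trailing window. A minimal sketch of the idea (the run shape here is illustrative, not the DevOptics schema):

```javascript
// Deployment Frequency over a trailing window, in deploys/day.
// The run objects are illustrative; DevOptics computes this from
// the job runs reported by the plugin.
function deploymentFrequency(runs, windowDays, now) {
  const windowMs = windowDays * 24 * 60 * 60 * 1000;
  const deploys = runs.filter(
    (r) => r.result === 'SUCCESS' && now - r.timestamp <= windowMs
  );
  return deploys.length / windowDays;
}

// Example: 14 successful deploys in the last 7 days => 2 per day.
const now = Date.now();
const runs = [];
for (let i = 0; i < 14; i++) {
  runs.push({ result: 'SUCCESS', timestamp: now - i * 10 * 60 * 1000 });
}
runs.push({ result: 'FAILURE', timestamp: now }); // failures don't count
console.log(deploymentFrequency(runs, 7, now)); // 2
```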

You can get more details about all this from the CloudBees DevOptics User Guide, and you can try it out yourself!

Using the Value Stream modeler is straightforward. Setting up the plugin to enable Deployment Frequency is pretty easy too, but below I spell out the steps, picking up right after my earlier post: Upgrading to the latest DevOptics.

After connecting the plugin, go to the Settings screen in DevOptics. This is how it looks if you have configured only the Run Insights section of the Plugin:

And this is how it looks after also connecting the Value Streams section:

Now create your value stream:

Here is a very simple one-gate value stream

If we leave the gate unconfigured, it will look like this (note the wrench icon):

But, since we want to track Deployment Frequency, we will now edit and configure the gate against one of the jobs on a master where the plugin has been installed. This is the configuration popup:

Save and you will see that the wrench goes away:

At this point we are ready! We just need to run some jobs on the master and data will flow, so…

We go to our Jenkins and run some jobs…

Now if we go back to the DevOptics app and look at the Value Streams screen we will see something like this:

On the Run Insights side, the graphs will initially look like the below – note that the grey bars are gone and there is some empty space for the most recent time interval:

But, if we wait a bit for the data to be propagated, we will see:

which is exactly what we expected.

So, there you go! Enjoy!

Day 1 building a SaaS

Here is my list of things to do ASAP when building a SaaS product (like DevOptics). What is yours?

Deployment Environments

Get a DEV (inside the firewall), STAGE (outside the firewall) and PROD (outside) set up.

Continuous Deployment

Arrange your software flow so it goes automatically from repo commit to PROD.

  • Implement zero-downtime deployments from day 1, using blue-green deployments or k8s cut-overs
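The core of the blue-green idea fits in a few lines. This is an in-memory sketch with a hypothetical health check; a real setup would flip a load balancer or a Kubernetes Service selector instead:

```javascript
// Blue-green cut-over in miniature: deploy to the idle color,
// health-check it, and only then flip traffic. Names are illustrative.
function makeRouter() {
  return { live: 'blue', versions: { blue: 'v1', green: null } };
}

function deploy(router, version, healthCheck) {
  const idle = router.live === 'blue' ? 'green' : 'blue';
  router.versions[idle] = version;   // stage the new version on the idle color
  if (!healthCheck(version)) {
    return router.live;              // unhealthy: traffic stays where it is
  }
  router.live = idle;                // atomic flip => zero downtime
  return router.live;
}

const router = makeRouter();
deploy(router, 'v2', () => true);
console.log(router.live, router.versions[router.live]); // green v2
deploy(router, 'v3-broken', () => false);
console.log(router.live); // still green
```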

Feature Flags

Add Feature Flags (like Rollout) on Day One.  Stay in the trunk. Don’t keep code out of PROD.
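The flag check itself is tiny; here is a minimal in-memory sketch (not the Rollout SDK, which evaluates flags remotely and supports gradual rollouts):

```javascript
// Trunk-based development with flags: merged code ships to PROD dark,
// and the flag decides who sees it. In-memory sketch; flag and user
// names are illustrative.
const flags = new Map();

function setFlag(name, { enabled = false, allowList = [] } = {}) {
  flags.set(name, { enabled, allowList });
}

function isEnabled(name, user) {
  const flag = flags.get(name);
  if (!flag) return false;                     // unknown flags are off
  return flag.enabled || flag.allowList.includes(user);
}

setFlag('new-checkout', { enabled: false, allowList: ['internal-tester'] });
console.log(isEnabled('new-checkout', 'internal-tester')); // true
console.log(isEnabled('new-checkout', 'regular-user'));    // false
```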

Front-End Infrastructure

Set up your front-end infrastructure for speed and decouple it from the back-end

  • We like GraphQL; others work
  • Arrange for Mock APIs
  • Set lightweight Front/Back separation
  • We like Cypress 
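On the Mock APIs point: the decoupling comes from coding the front-end against a query shape whose resolvers can be swapped. A plain-JavaScript sketch of the idea (no GraphQL library, illustrative names):

```javascript
// Front/back decoupling via a swappable resolver map: the front-end
// codes against the query shape, and the backing resolvers can be
// mocks until the real service is ready.
const mockResolvers = {
  valueStream: (id) => ({ id, gates: ['build', 'test', 'deploy'] }),
};

function makeClient(resolvers) {
  return {
    query(field, ...args) {
      if (!resolvers[field]) throw new Error(`Unknown field: ${field}`);
      return resolvers[field](...args);
    },
  };
}

const client = makeClient(mockResolvers); // later: makeClient(realResolvers)
console.log(client.query('valueStream', 'vs-1').gates.length); // 3
```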

Instrument your System for Ops

Instrument on Day One using something like DataDog APM. Reuse existing SaaS services; don’t build your own.

Operate in Self-Contained Teams

A team should be able to build the back-end and the front-end.  Minimize interactions between teams. Get things to PROD.  PM and UX must be part of your team.

Keep Value Flowing

Don’t hog Value / Code in Open PRs.  Push functionality out, even with a Feature Flag.  Get Feedback. 

Iterate on Processes

That’s the core of “Agile”; operate with Agility and iterate

Add Connection with Customer

Use a SaaS service from Day One. There are multiple options out there.

Track Usage

Instrument your SaaS so you can track usage, front-end and back-end. Use something like Segment, Looker, Interana.
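Whichever vendor you pick, the instrumentation has the same shape: fire named events with properties, batch them, flush. A toy sketch of that shape (Segment’s real SDK is analytics-node; everything here is illustrative):

```javascript
// Usage tracking in miniature: batch events, flush when the batch
// fills (a real SDK also flushes on a timer). `send` stands in for
// the vendor's HTTP endpoint.
function makeTracker(send, batchSize = 3) {
  const queue = [];
  return {
    track(userId, event, properties = {}) {
      queue.push({ userId, event, properties, ts: Date.now() });
      if (queue.length >= batchSize) this.flush();
    },
    flush() {
      if (queue.length) send(queue.splice(0)); // drain and ship the batch
    },
    pending: () => queue.length,
  };
}

const sent = [];
const tracker = makeTracker((batch) => sent.push(...batch), 3);
tracker.track('u1', 'Value Stream Created', { gates: 1 });
tracker.track('u1', 'Gate Configured');
console.log(sent.length); // 0 – still buffered
tracker.track('u2', 'Job Run');
console.log(sent.length); // 3 – batch flushed
```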

Upgrade your DevOptics – July 2019 Edition

DevOps World | Jenkins World SF is around the corner, so time for an update post to add to the collection of posts on DevOptics … et al.

Back in September 2018 I posted instructions on how to install your Jenkins distribution from scratch, and then how to install the DevOptics Plugins. Let’s assume you have done that; picking up from there…

The first step would be to install the latest version of Jenkins.   Since I already had one, I just accepted the request from within Jenkins and installed the new version.  That got me from 2.121.3 to

Next was to upgrade the DevOptics Plugin.

By now I was running a fairly old version; I jumped from 1.1349 to 1.1739.

The upgrade went pretty smoothly, and I restarted.  Then I went to look at the Connectivity status of the DevOptics Plugin.  Here, the latest Plugin is trying something different: it is asking me if I want to enable the Value Stream functionality.  Here is the screenshot:

The Value Stream functionality depends on additional plugins.  I can continue as-is, without installing the extra dependencies, or I can install the recommended plugins.  I chose the latter, and then I reinstalled.

I knew that my DevOptics subscription had changed recently, so, partly as an exercise, I disconnected the plugin and then reconnected. This is the popup we get on disconnect:

And this is the message we get on reconnect:

Cool.  Now I am back to where I was, except that now the new dependencies have been installed and my DevOptics plugin is fully connected.

At this point we are cooking and the Plugin should be connected to the back-end. Stay tuned for the next post in this series.

Work at Home Setup

I have been meaning to capture my work setup for a bit, so here is a quick note on it:

Physical Setup

Standing desk from Fully: a Jarvis bamboo adjustable. I have a 48”x30” tabletop as I didn’t have space for a wider desk at the time, but I would get a 60”x30” next time. The desk is adjustable with an electric motor with 4 presets. Totally worth the price.

Monitor is an old Apple 27-inch Thunderbolt. I have two but am currently running with a single one, with a tray for my laptop. When I was using two monitors I had the right one in landscape and the left one in portrait, with my laptop closed on a Kradl vertical stand, but I found I spent too much time moving windows around the screens and I like the cleaner setup of the current arrangement. The monitor arms can handle the weight of the two monitors (23.5 lb each); they are Chief Kontour K1D.

Keyboard (MS Natural Ergonomics 4000) and mouse (Logitech), both wired. Cheap and reliable.

Chair from Ikea (Vagsberg). Height is adjustable but tilt is not. The dimensions work for me and it’s a very cheap chair. I upgraded the castors to non-marking polyurethane wheels.

Dedicated “Spare” room, shared with pets (1 dog, 2 cats).


US West Coast. The team spans 9 time zones; I’m at the west end of the range. I work from home close to 100% of the time, plus trips for meetings (elsewhere in the US, Europe).

Connectivity is local AT&T U-verse at 50Mb/5Mb. I wish I had fiber, but there is no fiber in our area, and I like the service from Sonic. NetGear Orbi tri-band (not really) “mesh” router. WiFi is available throughout the house.

Social Setup

Kids are grown up and living on their own. Most of my social interaction is with the team (video, Slack) and with the neighborhood, which has plenty of dog-walking folks and kids walking to/from school.

CloudBees DevOptics at DevOps World | Jenkins World 2018 (from CloudBees Blog)

This is an indirect link to the official post.

Last month I attended our annual DevOps World | Jenkins World 2018 in San Francisco. The event has gotten bigger every year, and – spoiler alert! – next year the conference will be at the San Francisco Moscone Center.

CloudBees DevOptics was an integral part of the event and this post will highlight new updates and features to the product. But first, I’ll give you a recap of what happened in San Francisco…

The rest of this post is HERE.

It’s Friday Night – Do You Know What Your Code is Doing?

(this is a reprint of the post from Sept 6th, 2017)

Marc Andreessen wrote back in 2011 that Software is Eating the World, and nowadays business execs everywhere, from GE to Ford to ABB, are all saying that their companies are really Software Companies.


Hold that thought…

Now look at a typical process for creating, testing, and releasing Software.  What you see is a multiplicity of individuals and groups collaborating to coordinate how ideas are converted into code that is tested, integrated, deployed, measured, and evolved, in faster and faster cycles.  And companies – all companies – are investing heavily to improve these processes to gain a competitive advantage, or just to remain competitive!


Next, peek at these Software Processes and you see automation and improved information flow and coordination as the key enablers.  And our friend Jenkins is one of a handful of tools that are Everywhere (in all companies) and Everywhere (in all places inside a company).

My employer, CloudBees, is the home for Enterprise Jenkins.  CloudBees Jenkins Enterprise and CloudBees Jenkins Team address running Jenkins in your Enterprise…



What is missing is how to connect together all these automation engines, and other agents in these Software Processes, so you can gain insight on and improve upon your Software Process…

So, today, CloudBees is announcing CloudBees DevOptics!

DevOptics Logo

CloudBees DevOptics connects all the islands of software development and provides you with a global view of your software process.   We, of course, connect to all your CloudBees Jenkins instances, but we will also connect to Open Source Jenkins instances; and more.


The screenshot above shows a very simple Software Process involving 3 Jenkins instances, used to create two Components (Plugins A and B) and then a combined Artifact that integrates those two components.

DevOptics collects the information from these 3 instances – they may be anywhere in your organization – and makes sense of the changes that are flowing through them. The screenshot shows the tickets that have gone through the system – in this case filtered to the last 14 days – and shows where those changes are within these 3 “Gates”. Some changes are still in the Components; others have gone through the integration, or perhaps are being tested right now.

As you can see, DevOptics can show you details about these changes. We connect back to the defect tracking system – JIRA in this case – to get the current data on those tickets. We also provide, though it is not shown above, a connection back to the code repository (GitHub in this case) and, of course, to the automation engine itself, the Jenkins job.

In future blogs I will talk more about DevOptics.  In the meantime, you can check out today’s keynote announcement (video) and also the booth demo we are doing at JenkinsWorld (video).

VIDEO Screenshot HERE

Today is our announcement; we will launch later in the year.  Go and contact our friendly CloudBees Sales Team to engage with us. DevOptics is a SaaS product, so we will iterate quickly to add features and to accommodate feedback and new customer needs.

And remember, all Companies are Software Companies!  Regardless of the industry your company is in, you can benefit from CloudBees DevOptics! 🙂

Yes, we are taking over the world.  Go Team!


Jenkins World 2017

It’s been a long run since the first Jenkins User Conference, back in October 2011, at the Marines’ Memorial Hotel and Club.

The 2017 edition, now called Jenkins World, is at the end of this month, now at the much bigger SF Marriott Marquis Hotel, at 780 Mission Street.


Ping me if you are around, attending, or just want to chat.  At this point I’m planning to be there for the actual conference, on Wed and Thu; whether I’ll be there Mon or Tue, workshop days, will depend on the usual Gods of Software 🙂

See you around!

My Favorites from CES 2017

I spent a couple of days at CES 2017 last week. Two days is tight for CES and I missed a few things that were on my to-visit list, but it was still totally worth the trip. A selection of my photos is in Flickr, but here are some highlights:

Most Guts – Prosthesis

This is an exoskeleton designed for racing. It has four mechanical limbs controlled by the racer within, in a mostly prone position, one per leg and arm (I uploaded a video of the movement). The project is now part of Furrion Robotics; the vision of the designer (Jonathan Tippett) is a racing league. The current version was completed just before CES 2017 and still does not have the actuators on it, but here is a 1/3 scale version of the left arm actuator:


It’s very impressive and feels properly nutty… in photos and even more in real life.  I created an album just for it.

Also check out the original web site, the Indiegogo project, and the Gizmag/New Atlas article.

Geekiest Demo – Qualcomm Drive Data Platform

Qualcomm had a very impressive demo of their Drive Data Platform.  They use a Snapdragon 820Am with a camera, a fast (Cat 12) modem, and a neural network app using the Snapdragon Neural Processing Engine.   Here is the (simple) camera setup:


DDP uses the camera and SNPE to recognize the objects around the car in real time, especially buildings and such that bounce the GPS signals and limit the accuracy of the GPS. Then DDP can filter out all but the direct signals from the satellites and determine the location of the car very accurately and quickly. Here is a screenshot showing all the buildings being detected in real time by DDP.


The (very accurate) location can then be used, again with computer vision, to determine the location of other elements in the field of vision of the camera, like the lanes in the freeway. This information can be shared with the map system. And it can be crowdsourced!

Repeat all this, and you can get very accurate maps very quickly.  I asked how large a sample they would need to do this and the presenter indicated that a large manufacturer would be able to do this by itself.   Think about the implications for the mapping ecosystem!

Here is the Qualcomm Blog Post on the DDP.  I’m sorry I didn’t take a video of the whole demo, but it was very impressive.  I think Qualcomm is doing a bunch of things right.


The main reason I went to CES this year was to check out all the work on AI / ML / Neural Networks. I believe that in a couple of years we all will be using these things routinely in our apps. Some of it will be at the edge, some will be in the cloud. The DDP is an example of this. The Snapdragon Neural Processing Engine (SNPE) was being demoed by itself elsewhere in the Qualcomm booth and it was very impressive. It’s designed so it can leverage the CPU, the GPU and the DSP on the Snapdragon – very neat and fast. Very interesting times ahead.

Coolest Area – Eureka Park

The first floor of the Sands Expo contains Eureka Park. I think CES did a great job with it this year. It had areas for different types of startups: early-stage, mid-stage, university-backed, different non-US locations, Indiegogo, etc. It was very busy, with some very interesting ideas and some not-so-interesting ones, chaotic and fun. Quite different from the more organized floors elsewhere at CES. Here is the map:


The area had a bunch of things. Something that caught my attention was BioMindR – they leverage wireless signals and machine learning to continuously sense hydration, glucose and fluid levels without contacts. It reminded me, at a very different level, of the work at MIT that is behind Emerald.


There were many other interesting booths. I particularly enjoyed talking with the Sensel folks, but there were many more. The Beon camera was also fun – I’m not convinced by it as a wrist wearable, but there should be a good fit for it somewhere.

Et Cetera…

I’ll leave you with some more pictures:

Can you figure out how this works? Click on the image to see the video clip. The hint is: the demo was in the Nidec booth; they specialize in motors, bearings, robotic transporters, etc. Their motors are in the Autel Robotics drones (very nice – they announced a deal with FLIR on dual thermal/visual cameras), and in many others.


And, from the very large and well-visited Xiaomi Mi booth, to show that they are serious, Mr. Hugo Barra:


Xiaomi was testing the waters for coming to the US. The prices were amazing. I would not buy one of their phones, but they had plenty of other things I’d consider purchasing, especially this electric foldable bicycle – listed at the booth for $430!


Ah, and the LG 4K OLED monitors were excellent.  I’m not a monitor guy but they were very very impressive!

Bluetooth 5.0 is beginning to show up and so is Thread. Nordic’s nRF52840 is “ready” for both standards. It’s hard for me to predict the traction for Thread, but I expect we will see Bluetooth 5.0 everywhere soon.

Worth The Trip

Totally worth the trip to Las Vegas. As in previous trips, the best part is always the conversations with the booth folks. Special thanks to the people from Qualcomm, Intel, Pikazo, Nordic, Beon, Nidec, Shenzhen Minew, Prosthesis, AppMyHome, Sensel, Orangie, RetailNext, Autel, ChargePoint and many more.

And check the Flickr album if you want to see more pics and additional commentary.

NativeScript and Modern Sensors and Instruments


Modern applications are increasingly leveraging a multiplicity of sensors to connect the physical realm to the traditional software realm to gain multiple benefits, from efficiencies of operation, to security and safety, to new functionality.

Here are some real-life examples:

  • Sensors like RFID tags allow Zara, the largest apparel retailer in the world, to track their inventory accurately and in real time, from the time it arrives in the trucks to the time it leaves with a customer, providing asset control, inventory availability, smart sale recommendations, connection to social media, automatic reorders, and more.
  • An industrial equipment vendor like TVH can use data from GPS and OBD-II sensors to track cars – their use, speed, acceleration and deceleration, gas consumption, battery status – and reduce operating costs, from energy to maintenance to insurance.
  • Kingslake, a Progress customer in Sri Lanka, provides an application for managing the transportation of employees to factories using independent mini-bus operators. The employees use NFC tags to identify themselves to the operators, who read them using rugged Android devices that can then use GPS data and connect to the backend to validate routes, provide accountability, and even notify the factory if an employee won’t show up for their shift.
  • An event, like our own ProgressNEXT or the CSUMB Capstone Festival, can use a combination of RFID tags and readers and iBeacons, mediated through Mobile Applications, to provide personalized event information as well as tracking, logging and authorizing event data.

In the above examples, the server-side services can be created on top of platforms like Rollbase, Modulus and the Telerik Platform, using data services like Progress OpenEdge and many SaaS services, and they will typically interact with the clients through internet standards like HTTPS and WebSockets. The sensors themselves are part of, or interact with, mobile devices like smartphones and smart watches, and fixed devices like readers and instruments – smart door locks, access control readers, etc.

There are a few key OS platforms for these devices. The smallest sensors still run on “traditional” real-time operating systems. A number of handhelds and instruments used to run on Windows CE, but Microsoft has ended its life, transitioning to Windows 10 IoT. Meanwhile, many mobile devices are increasingly based on Android, while fixed devices are moving towards Linux – and Android Things was just announced.

NativeScript is particularly well-suited to this space. NativeScript today runs well on iOS and Android (see note below) and provides very efficient and timely access to any platform feature, including new libraries created to access the sensors. The NativeScript metadata-driven machinery exposes these libraries at the JavaScript level, and they can then be wrapped through a Plugin into a cleaner abstraction. This means that JavaScript and CSS developers can write applications leveraging these sensors.
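To make the plugin pattern concrete, here is a sketch of its shape: a platform object (which NativeScript’s metadata machinery would expose to JavaScript on the device) wrapped behind a small Promise-based API. The reader object is injected so the sketch runs anywhere; all names are illustrative, not a real plugin:

```javascript
// NativeScript plugin pattern in miniature: wrap a native library
// behind a small, idiomatic JS API. The native reader is injected
// here so the sketch runs anywhere; on-device it would be the real
// platform class exposed by the NativeScript runtime.
function createRfidReader(nativeReader) {
  return {
    scan() {
      return new Promise((resolve, reject) => {
        try {
          // Native call returns raw tag IDs; normalize for JS consumers.
          const tags = nativeReader.inventory();
          resolve(tags.map((t) => String(t).toUpperCase()));
        } catch (e) {
          reject(e);
        }
      });
    },
  };
}

// Stub standing in for the device's native reader object.
const stubNative = { inventory: () => ['e200-01', 'e200-02'] };
createRfidReader(stubNative).scan().then((tags) => {
  console.log(tags); // ['E200-01', 'E200-02']
});
```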

Note – Beyond iOS and Android, Progress/Telerik also has been doing work on the NativeScript port for Windows Universal (for Tablets and Desktops), and there is a separate project on porting it to Linux leveraging GitHub Electron Shell.

Here are some links to recent projects that are built on this setup:

  • The Invengo XC-1003 is an Android (KitKat)-based mobile RFID reader. The RFID reader is available to Android apps, but Mehfuz wrote a NativeScript Plugin on top of it, and then a simple App using it. Check out the blog post, the video and the GitHub repo. The App mimics a retail situation and actually uses 4 plugins: an RFID Reader, an iBeacon Reader (also written by Mehfuz), Google Maps, and SQLite for local storage, as well as checking with a (node.js-based) service for content.
  • An App for Fleet Management that uses the Bluetooth APIs to talk to an OBD-II BLE sensor (like this one), and which can then present the car data or push it to the cloud. Check out the blog post, video and GitHub repo.
  • We also have done several variations of an App to support events:
    • A simple Meetup App where RFID tags are attached to Badge Holders and then are used to track attendance and to run a raffle. Check the blog post.
    • Another, more complex version of the RFID setup was used at the ProgressNEXT 2016 event in Las Vegas. The ProgressNEXT application did more sophisticated server-side processing using the AMTech IoT Platform. See the post.
    • The latest version used both RFID tags (using the AMTech Platform) and iBeacons (via a plugin) and was used in the CSUMB Capstone festival.
  • Finally, this setup is identical to the one used by Kingslake in the employee transportation project mentioned at the beginning of this writeup. Kingslake is currently using a Native App for the Android smartphone client, but it would be straightforward to replace it with a more capable NativeScript solution.
  • Our TODO pile includes two other projects:
    • A Plugin for the Keonn AdvanReader 10, which is a flexible USB-connected RFID reader that can be used from an Android device. This would be very suitable for retail.
    • Running a NativeScript application directly inside a ThingMagic Sargas (blog, product page). This would require using the advanced NativeScript build that sits on top of the GitHub Electron Shell.
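As an aside on the OBD-II fleet app above: the BLE dongle answers mode-01 PID queries with hex bytes that decode via the standard SAE J1979 formulas (vehicle speed, PID 0x0D, is the raw byte in km/h; engine RPM, PID 0x0C, is (256*A + B)/4). A small decoder sketch:

```javascript
// Decode OBD-II mode-01 responses, e.g. "41 0D 3C": the dongle
// answers a "01 0D" speed query with "41 0D <speed byte>".
function decodeObd(response) {
  const bytes = response.trim().split(/\s+/).map((h) => parseInt(h, 16));
  const [mode, pid, a, b] = bytes;
  if (mode !== 0x41) throw new Error('not a mode-01 response');
  switch (pid) {
    case 0x0c: return { rpm: (256 * a + b) / 4 };  // engine RPM
    case 0x0d: return { speedKmh: a };             // vehicle speed
    case 0x05: return { coolantC: a - 40 };        // coolant temperature
    default: throw new Error(`unsupported PID ${pid}`);
  }
}

console.log(decodeObd('41 0D 3C'));    // { speedKmh: 60 }
console.log(decodeObd('41 0C 1A F8')); // { rpm: 1726 }
```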

The future belongs to this class of applications: fully connected in real time to our world, with substantial computational power – able to run the new AI algorithms, including Machine Learning – and connected to the internet and our enterprise assets. NativeScript is an excellent tool to use in this environment.