RSS

Microservices News

These are the news items I've curated in my monitoring of the API space that have some relevance to the microservices conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is defining not just their APIs, but their schema, and other moving parts of their API operations.

Testing and Meaningful Mocks in a Microservice System by Laura Medalia (@codergirl__) of Care/Of At @APIStrat In Nashville

We are getting closer to the 9th edition of APIStrat happening in Nashville, TN this September 24th through 26th. The schedule for the conference is up, along with the first lineup of keynote speakers, and my drumbeat of stories about the event continues here on the blog. Next up in our session lineup is “Testing and Meaningful Mocks in a Microservice System” by Laura Medalia (@codergirl__) of Care/Of on September 25th.

Here is Laura’s abstract for the session:

Laura will be talking about tooling for mocking microservice endpoints in a meaningful way using OpenAPI specifications. She will cover how to set up microservice deployment processes so that with each versioned microservice deployed, a mock of the service with up-to-date contracts will also be deployed. Laura will also show how to use tooling she and her team built to consume these lightweight mocks in unit tests and either get default mock responses or mock out custom responses for different test cases.

In an era where many API development groups are working to move to a design-first and mock-first approach, this session will be key to our journey, and APIStrat is where we all need to be. You can register for the event here, and there are still sponsorship opportunities available. Don’t miss out on APIStrat this year–it is going to be a good time in Nashville as we continue the conversation we started back in 2012 with the initial edition of the API industry event in New York City.

I am looking forward to seeing you all in Nashville next month!

Photo Credit: Laura Medalia on Pintaram.


Microservice'ing Like a Unicorn With Envoy, Istio & Kubernetes With Christian Posta Of Red Hat At APIStrat In Nashville

We are getting closer to the 9th edition of APIStrat happening in Nashville, TN this September 24th through 26th. The schedule for the conference is up, along with the first lineup of keynote speakers, and my drumbeat of stories about the event continues here on the blog. Next up in our session lineup is “Microservice’ing Like a Unicorn With Envoy, Istio and Kubernetes” by Christian Posta (@christianposta) of Red Hat (@RedHat) on September 25th.

Here is Christian’s abstract for the session:

The exciting parts of APIs, unfortunately, happen when services actually try communicating and working together to accomplish some business function. The service-mesh approach has emerged to help make service communication boring. In particular, a project named Istio.io has garnered attention in the open-source community as a way of implementing the service mesh capabilities. These capabilities include pushing application-networking concerns down into the infrastructure: things like retries, load balancing, timeouts, deadlines, circuit breaking, mutual TLS, service discovery, distributed tracing and others.

As Istio becomes more popular and widely used, we’re going to see a lot of people put it into production for their API use cases. This talk will walk attendees through the Istio architecture, and more importantly, help them understand how it all works.

API delivery, integration, orchestration, and development of mesh networks all represent the next generation of doing APIs, and APIStrat is where we are having these discussions. You can register for the event here, and there are still sponsorship opportunities available. Don’t miss out on APIStrat this year–it is going to be a good time in Nashville as we continue the conversation we started back in 2012 with the initial edition of the API industry event in New York City.

I am looking forward to seeing you all in Nashville next month!


Concerns Around Managing Many Microservice Repositories And Going With A Mono Repo

About half of the teams I work with on microservices strategy are beginning to freak out about the number of repositories they have, and someone is regularly bringing up the subject of having a mono repo. That is usually a sign for me that a group is not ready to do the hard work involved with microservices, and it also shows a lack of ability to think, act, and respond to things in a distributed way. It can be a challenge to manage many different repositories, but with an awareness of the sprawl that can exist, and some adjustments and aggregation in your strategy, it is doable, even for a small team.

The most important part of sticking to multiple repositories is for the sake of the code. Keeping services decoupled in reality, not just in name, is extremely important. Allowing the code behind each service to have its own repository and build pipeline keeps things more efficient, nimble, and fast. Each service you layer into a mono repo will be one more chunk of time needed when it comes to builds, and to understanding what is going on with the codebase. I know there are a growing number of strategies for managing mono repos efficiently, but it is something that will begin to work against your overall decoupling efforts, and you are better off having a distributed strategy in place, because code is only the first place you’ll have to battle centralization in the name of a more distributed reality.

Github, Gitlab, and Bitbucket all have APIs, which makes all of your repositories accessible in a programmatic way. If you are building microservices, and working towards a distributed way of doing things, it seems like you should be using APIs to aggregate and automate your reality. It is pretty easy to set up an organization for each grouping of microservices, and set up a single master or control repository where you can aggregate information and activity across all repositories–using Github Pages (or another static implementation) as a central dashboard and command center for managing and containing microservice sprawl. Your repository structure should reflect your wider microservices organization strategy, and all the moving parts should be allowed to operate in a distributed fashion, not just the code, but also the conversation, support, and other essential elements.
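
To make this concrete, here is a rough sketch in Python of the kind of aggregation I am describing, pulling every repository for an organization via the Github API and dumping a summary to a data file that a central dashboard repository could render. The organization name and output file are placeholders, not anything from a real project.

```python
# Aggregate microservice repositories across a Github organization
# using the Github REST API, and write a summary data file that a
# central dashboard (Github Pages, Jekyll, etc.) can render.
import json

import requests

GITHUB_API = "https://api.github.com"
ORG = "your-microservices-org"  # hypothetical organization name


def list_repos(org, token=None):
    """Page through every repository belonging to an organization."""
    headers = {"Authorization": f"token {token}"} if token else {}
    repos, page = [], 1
    while True:
        resp = requests.get(
            f"{GITHUB_API}/orgs/{org}/repos",
            params={"per_page": 100, "page": page},
            headers=headers,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        repos.extend(batch)
        page += 1
    return repos


if __name__ == "__main__":
    summary = [
        {
            "name": repo["name"],
            "description": repo["description"],
            "updated": repo["updated_at"],
            "open_issues": repo["open_issues_count"],
        }
        for repo in list_repos(ORG)
    ]
    with open("services.json", "w") as f:
        json.dump(summary, f, indent=2)
```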

I’m spending more time learning about Kubernetes, and studying how microservices are being orchestrated. Like other aspects of the API world, I’m going to focus on not just the code, but also the other communications, support, dependencies, security, testing, and critical building blocks of delivering APIs. I feel like many folks I’m talking with are getting hung up on the distributed nature of everything else, while trying to distribute and decouple their code base. Microservices are definitely not easy to do, and decoupling isn’t an automatic solution to all our problems. From what I am seeing, it is opening up more problems than it is solving in some of the organizations I am working with, and causing a lot of anxiety about the scope of what teams will have to tackle when trying to find success with microservices across their increasingly distributed organizations.


How Should We Be Organizing All Of Our Microservices?

A question I get regularly in my API workshops is, “how should we be organizing all of our microservices?” To which I always recommend they tune into what the API Academy team is up to, and then I dance around giving a long-winded answer about how hard it is for me to answer that. I think in response, I’m going to start asking for a complete org chart for their operations, a list of all their database schema, and a list of all their clients and the industries they are operating in. It will still be a journey for them, and for me, to answer that question, but maybe this response will help them understand the scope of what they are asking.

I wish I could provide simple answers for folks when it came to how they should be naming, grouping, and organizing their microservices. I just don’t have enough knowledge about their organization, clients, and the domains in which they operate to provide a simple answer. It is another one of those API journeys an organization will have to embark on, and find their own way forward. It would take so much time for me to get to know an organization, its culture, resources, and how they are being put to use, that I hesitate to even provide any advice, short of pointing them to the books the API Academy team publishes, and the talks they provide. They are the only guidance I know of that goes beyond the hyped definition of microservices, and actually gets at the root of how you do it within specific domains, and tackles the cultural side of the conversation.

A significant portion of my workshops lately have been about helping groups think about delivering services using a consistent API lifecycle, and showing them the potential for API governance if they can achieve this consistency. Clearly I need to back up a bit, and address some of the prep work involved with making sure they have an organizational chart, all of the schema they can possibly bring to the table, the existing architecture and services in play, along with as much detail as possible on the clients, industries, and domains in which they operate. In most of my workshops I’m going in blind, not knowing who will all be there, but I think I need a section dedicated to the business side of doing microservices before I ever begin talking about the technical details of delivering microservices, otherwise I will keep getting questions like this that I can’t answer.

Another area that is picking up momentum for me in these discussions is a symptom of the lack of API discovery, and directly related to the other areas I just mentioned. You need to be able to deliver APIs along a lifecycle, but more importantly you need to be able to find the services, schema, and people behind them, as well as coherently speak to who will be consuming them. Without comprehensive discovery, and the ability to understand all of these dependencies, organizations will never be able to find the success they desire with microservices. They won’t be any better off than with the monolithic way many organizations have been doing things to date, it will just be much more distributed complexity, which will achieve the same results as the monolithic systems that are in place today.


Using Jekyll To Look At Microservice API Definitions In Aggregate

I’ve been evaluating microservices at scale, using their OpenAPI definitions to provide API design guidance to each team using Github. I just finished a round of work where I took 20 microservices, evaluated each OpenAPI definition individually, and made design suggestions via an updated OpenAPI definition that I submitted as a Github issue. I was going to submit them as pull requests, but I really want the API design guidance to be suggestions, as well as a learning experience, so I didn’t want them to feel like they had to merge everything.

After going through all 20 OpenAPI definitions, I took all of them and dumped them into a single local repository where I was running Jekyll via localhost. Then using Liquid I created a handful of web pages for looking at the APIs across all microservices in a single page–publishing two separate reports:

  • API Paths - An alphabetical list of the APIs for all 20 services, listing the paths, verbs, parameters, and headers in a single bulleted list for evaluation.
  • API Schema - An alphabetical list of the APIs for all 20 services, listing the schema definitions, properties, descriptions, and types in a single bulleted list for evaluation.

While each service lives in its own Github repository, and operates independently of each other, I wanted to see them all together, to be able to evaluate their purpose and design patterns from the 100K level. I wanted to see if there was any redundancy in purpose, and if any services overlapped in functionality, as well as trying to see the potential when all API resources were laid out on a single page. It can be hard to see the forest for the trees when working with microservices, and publishing everything as a single document helps flatten things out, and is the equivalent of climbing to the top of the local mountain to get a better view of things.

Jekyll makes this process pretty easy by letting me dump all the OpenAPI definitions into a single collection, which I can then navigate easily using Liquid on a single HTML page, providing a comprehensive report of the functionality across all microservices. I am calling this my microservices monolith report, as it kind of reassembles all the individual microservices into a single view, letting me see the collective functionality available across the project. I’m currently evaluating these services from an API governance perspective, but at a later date I will be taking a look from the API integration perspective, and trying to understand if the services will work in concert to help deliver on a specific application objective.
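
While my actual implementation uses Jekyll and Liquid, a quick sketch of the same aggregate "API Paths" report in Python may help illustrate the idea–walking a folder of OpenAPI definitions and flattening every path, verb, and parameter into a single list. The folder layout and OpenAPI version are assumptions, not the details of this particular project.

```python
# Flatten a folder of OpenAPI definitions into a single "API Paths"
# style report, listing every path, verb, and parameter across services.
import glob

import yaml

HTTP_VERBS = {"get", "post", "put", "patch", "delete", "options", "head"}

for filename in sorted(glob.glob("_data/apis/*.yaml")):
    with open(filename) as f:
        spec = yaml.safe_load(f)
    print(spec.get("info", {}).get("title", filename))
    for path, operations in spec.get("paths", {}).items():
        for verb, operation in operations.items():
            if verb not in HTTP_VERBS:
                continue  # skip keys like path-level "parameters"
            params = [p.get("name", "") for p in operation.get("parameters", [])]
            print(f"  {verb.upper()} {path} ({', '.join(params)})")
```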


Looking At 20 Microservices In Concert

I checked out the Github repositories for twenty microservices of one of my clients recently, looking to understand what is being accomplished across all these services as they work independently towards a single collective objective. I’ve been contracted to come in blind and provide feedback on the design of the APIs being exposed across services, and to help provide guidance on their API lifecycle, as well as eventually API governance when things have matured to that level. Right now we are addressing pretty fundamental definition and design issues, but eventually we’ll hopefully graduate to the next level.

A complete and up-to-date README for each microservice is essential to understanding what is going on with a service, and a robust OpenAPI definition is critical to breaking down the details of what each API delivers. When you aren’t part of a service’s development team it can be difficult to understand what the service does, but with an up-to-date README and OpenAPI, you can get up to speed pretty quickly. If a service is well documented via its README, its API is well designed, and its surface area is reflected in its OpenAPI, you can go from not knowing what a service does to understanding its value within, hopefully, minutes, not hours.

When each service possesses an OpenAPI it becomes possible to evaluate what they deliver at scale. You can take all the APIs, their paths, headers, parameters, and schema, and lay them out in different ways so that you can begin to paint a picture of what they deliver in aggregate. This brings all the disparate services back together to perform in a sort of monolith concert, while still acknowledging that they all do their own thing independently. It allows us to look at how many different services can be used in concert to deliver a single application, or potentially a variety of application instances. Thinking critically about each independent service, but more importantly how they all work together.

I feel like many groups are still struggling with decomposing their monolithic systems into separate services, and while some are doing so in a domain-driven way, few are beginning to invest in understanding how they move forward with services in concert to deliver on application needs. Many of the groups I’m working with are so focused on decomposing and tearing down, they aren’t thinking too critically about how they will make all of this begin to work together again. I see monolithic systems working like a massive church organ, which takes a lot of maintenance, and requires a single (or a handful of) knowledgeable operators to play. Microservices are much more like an orchestra, where every individual player has a role, but they play in concert, directed by a conductor. I feel like most groups I’m talking with are just beginning the process of hiring a conductor, and have a bunch of musicians roaming around–not quite ready to play any significant productions quite yet.


A README For Your Microservice Github Repository

I have several projects right now that need a baseline for what is expected of microservices developers when it comes to the README for their Github repository. Each microservice should be a self-contained entity, with everything needed to operate the service within a single Github repository. This makes the README the front door for the service, and something that anyone engaging with the service will depend on to help them understand what it does, and where to get at anything needed to operate it.

Here is a general outline of the elements that should be present in a README for each microservice, providing as much of an overview as possible for each service:

  • Title - A concise title for the service that fits the pattern identified and in use across all services.
  • Description - Less than 500 words that describe what a service delivers, providing an informative, descriptive, and comprehensive overview of the value a service brings to the table.
  • Documentation - Links to any documentation for the service including any machine readable definitions like an OpenAPI definition or Postman Collection, as well as any human readable documentation generated from definitions, or hand crafted and published as part of the repository.
  • Requirements - An outline of the other services, tooling, and libraries needed to make a service operate, providing a complete list of EVERYTHING required to work properly.
  • Setup - A step by step outline, from start to finish, of what is needed to set up and operate a service, providing as much detail as you possibly can for any new user to be able to get up and running with a service.
  • Testing - Details and instructions for mocking, monitoring, and testing a service, including any services or tools used, as well as links to any reports that are part of active testing for a service.
  • Configuration - An outline of all configuration and environmental variables that can be adjusted or customized as part of service operations, including as much detail as possible on default values, or options that would produce different known results for a service.
  • Road Map - An outline broken into three groups: 1) planned additions, 2) current issues, 3) change log. Providing a simple, descriptive outline of the road map for a service, with links to any Github issues that support the plan for the service.
  • Discussion - A list of relevant discussions regarding a service with title, details, and any links to relevant Github issues, blog posts, or other updates that tell the story behind the work that has gone into a service.
  • Owner - The name, title, email, phone, or other relevant contact information for the owner, or owners, of a service, providing anyone with the information they need to reach out to the person who is responsible for a service.

Those are ten things that each service should contain in its README, providing what is needed for ANYONE to understand what a service does. At any point in time someone should be able to land on the README for a service and be able to understand what is happening without having to reach out to the owner. This is essential for delivering individual services, but also for the delivery of services at scale across tens, or hundreds, of services. If you want to know what a service does, or what the team behind the service is thinking, you just have to READ the README to get EVERYTHING you need.
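
If you want to enforce this outline across tens or hundreds of services, it is easy enough to script. Below is a minimal sketch of a README check, assuming each of the ten elements shows up as a heading in a README.md–the section names and file location are assumptions you would adjust to your own pattern.

```python
# Check a microservice README for the ten expected sections, suitable
# for wiring into a build pipeline so drift gets caught early.
import re
import sys

REQUIRED_SECTIONS = [
    "Title", "Description", "Documentation", "Requirements", "Setup",
    "Testing", "Configuration", "Road Map", "Discussion", "Owner",
]


def missing_sections(path="README.md"):
    with open(path) as f:
        readme = f.read()
    return [
        section
        for section in REQUIRED_SECTIONS
        # Look for each section as a markdown heading of any level.
        if not re.search(rf"^#+\s*{re.escape(section)}\b", readme,
                         re.IGNORECASE | re.MULTILINE)
    ]


if __name__ == "__main__":
    missing = missing_sections()
    if missing:
        print("README is missing sections:", ", ".join(missing))
        sys.exit(1)
    print("README contains all ten expected sections.")
```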

It is important to think outside your bubble when crafting and maintaining a README for each microservice. If it is not up to date, or is lacking relevant details, it means the service will potentially be out of sync with other services, and introduce problems into the ecosystem. The README is a simple, yet crucial aspect of delivering services, and it should be something any service stakeholder can read and understand without asking questions. Every service owner should be stepping up to the plate and owning this aspect of their service development, professionally owning this representation of what their service delivers. In a microservices world each service doesn’t hide in the shadows, it puts its best foot forward and proudly articulates the value it delivers, or it should be deprecated and go away.


Github Is Your Asynchronous Microservice Showroom

When you spend time looking at a lot of microservices, across many different organizations, you really begin to get a feel for the ones who have owners / stewards that are thinking about the bigger picture. When people are just focused on what the service does, and not actually how the service will be used, the Github repos tend to be cryptic, out of sync, and don’t really tell a story about what is happening. Github is often just seen as a vehicle for the code to participate in a pipeline, and not about speaking to the rest of the humans and systems involved in the overall microservices concert that is occurring.

Github Is Your Showroom

Each microservice is self-contained within a Github repository, making it the showcase for the service. Remember, the service isn’t just the code and other artifacts buried away in folders for nobody to understand, unless they know how to operate the service or continuously deploy the code. It is a service. The service is part of a larger suite of services, and is meant to be understood and reusable by other human beings in the future, potentially long after you are gone, and aren’t present to give a 15 minute presentation in a meeting. Github is your asynchronous microservices showroom, where ANYONE should be able to land, and understand what is happening.

README Is Your Menu

The README is the centerpiece of your showroom. ANYONE should be able to land on the README for your service, and immediately get up to speed on what a service does, and where it is in its lifecycle. The README should not be written for other developers, it should be written for other humans. It should have a title, and a short, concise, plain language description of what the service does, as well as any other relevant details about what the service delivers. The README for each service should be a snapshot of the service at any point in time, demonstrating what it delivers currently, what the road map contains, and the entire history of the service throughout its lifecycle. Every artifact, piece of documentation, and relevant element of a service should be available, and linked to, via the README for a service.

An OpenAPI Contract

Your OpenAPI (fka Swagger) file is the central contract for your service, and the latest version(s) should be linked to prominently from the README for your service. This JSON or YAML definition isn’t just some output, exhaust from your code, it is the contract that defines the inputs and outputs of your service. This contract will be used to generate code, mocks, sample data, tests, documentation, and drive API governance and orchestration across not just your individual service, but potentially hundreds or thousands of services. An up-to-date, static representation of your service’s API contract should always be prominently featured off the README for a service, and ideally located in a consistent folder across services, as service architects, designers, coaches, and governance people will potentially be looking at many different OpenAPI definitions at any given moment.

Issues Driven Conversation

Github Issues aren’t just for issues. They are a robust, taggable, conversational thread around each individual service. Github issues should be the self-contained conversation that occurs throughout the lifecycle of a service, providing details of what has happened, what the current issues are, and what the future road map will hold. Github Issues are designed to help organize a robust amount of conversational threads, and allow for exploration and organization using tags, allowing any human to get up to speed quickly on what is going on. Github issues should be an active journal for the work being done on a service, through a monologue by the service owner / steward, and act as the feedback loop with ALL OTHER stakeholders around a service. Github issues for each service will be how the anthropologists decipher the work you did long after you are gone, and should articulate the entire life of each individual service.

Services Working In Concert

Each microservice is a self-contained unit of value in a larger orchestra. Each microservice should do one thing and do it well. The state of each service’s Github repository, README, OpenAPI contract, and the feedback loop around it will impact the overall production. While a service may be delivered to meet a specific application need in its early moments, the README, OpenAPI contract, and feedback loop should attract and speak to any potential future application. A service should be able to be reused, and remixed, by any application developer building internal, partner, or some day public applications. Not everyone landing on your README will have been in the meetings where you presented your service. Github is your service’s showroom, and where you will be making your first, and ongoing, impression on other developers, as well as executives who are poking around.

Leading Through API Governance

Your Github repo, README, and OpenAPI contract are being used by overall microservices governance operations to understand how you are designing your services, crafting your schema, and delivering your service. Without an OpenAPI and README, your service does nothing in the context of API governance, feeding into the bigger picture, and helping define overall governance. Governance isn’t scripture coming off the mountain and telling you how to operate, it is gathered, extracted, and organized from existing best practices, and leadership across teams. Sure, we bring in outside leadership to help round off the governance guidance, but without a README, OpenAPI, and an active feedback loop around each service, your service isn’t participating in the governance lifecycle. It is just an island, doing one thing, and nobody will ever know if it is doing it well.

Making Your Github Presence Count

Hopefully this post helps you see your own microservice Github repository through an external lens. Hopefully it will help you shift from Github being just about code, for coders, to something that is part of a larger conversation. If you care about doing microservices well, and you care about the role your service will play in the larger application production, you should be investing in your Github repository being the showroom for your service. Remember, this is a service. Not in a technical sense, but in a business sense. Think about what good service is to you. Think about the services you use as a human being each day. How you like being treated, and how informed you like to be about the service details, pricing, and benefits. Now, visit the Github repositories for your services, and think about the people who will be consuming them in their applications. Does it meet your expectations for a quality level of service? Will someone brand new land on your repo and understand what your service does? Does your microservice do one thing, and does it do it well?


Serverless, Like Microservices, Is About Understanding Our Dependencies And Constraints

I am not buying into all the hype around the recent serverless evolution in compute. However, like most trends I study, I am seeing some interesting aspects of how people are doing it, and some positive outcomes for teams who are doing it sensibly. I am not going all in on serverless when it comes to deploying or integrating with APIs, but I am using it for some projects, when it makes sense. I find AWS Lambda to be a great way to get in between the AWS API Gateway and backend resources, in order to conduct a little transformation. I keep serverless as just one of many tools in my API deployment and integration toolbox, never going overboard with any single solution.
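
As a concrete example of what I mean by a little transformation, here is a minimal sketch of a Python Lambda function behind API Gateway’s Lambda proxy integration, reshaping a backend record before it goes back to the consumer. The field names and payload are made up for illustration.

```python
# A minimal AWS Lambda handler for API Gateway (Lambda proxy
# integration) that reshapes a backend record before responding.
import json


def lambda_handler(event, context):
    # In a real function this record would come from a backend call;
    # it is hardcoded here to keep the transformation itself in focus.
    backend_record = {"first_name": "Jane", "last_name": "Doe", "org_id": 42}

    # Conduct a little transformation between backend and consumer.
    transformed = {
        "name": f"{backend_record['first_name']} {backend_record['last_name']}",
        "organization": backend_record["org_id"],
    }

    # Response shape expected by the Lambda proxy integration.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(transformed),
    }
```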

I put serverless into the same section of my toolbox as I do microservices. Serverless is all about decoupling how I deploy my APIs, helping me keep small, meaningful things behind each of my API paths. Serverless forces me to think through how I am decoupling my backend, and pushes me to identify dependencies, acknowledge constraints, and get more creative in how I write code in this environment. To do one thing, and do it well, I need an intimate understanding of where my data and other backend resources are, and how I can distill a unit of compute down to the smallest unit I possibly can. Identifying, reducing, and being honest about what my dependencies are is essential to serverless working or not working for me.

The challenge I’m having with serverless at the moment is that it can be easy to ignore many of the benefits I get from dependencies, while just assuming all dependencies are bad. Specifically around the frameworks I deploy as part of my API solutions. When coupled with AWS API Gateway this isn’t much of a problem, but all by itself, AWS Lambda doesn’t have all of the other HTTP, transformation, header, request, and response goodness I enjoy from the API frameworks I’ve historically depended on. This shows me that understanding our dependencies is important, both the good and the bad. Not all dependencies are bad, as long as I am aware of them, have a knowledge of what they bring to the table, and it is a stable, healthy, and regularly evaluated relationship. This is a challenge I also come across regularly in microservices as well as serverless journeys: we just throw things out for the sake of size, without thinking deeply about why we are doing something, and whether or not it is worth it.

I can see plenty of scenarios where I will be thinking that a serverless approach is good, but after careful evaluation of dependencies and constraints, I abandon this path as an option. I’m guessing we’ll see a goldilocks style of serverless emerge that helps us find this sweet spot. Similar to what we are finding with the microservices evolution, in that micro isn’t always about small–it is often just about finding the sweet spot between big and small, hot and cold, dependent and independent. It is about being sensible in how we use the tools in our API toolbox, and realizing most of these approaches are more about the journey than they are about the actual tool.


The Business of Running Government As A Microservices Platform

I recently wrote a response to a Department of Veterans Affairs RFI which contained a section about the business of operating government as a microservices platform. I know that many folks wouldn’t make it that far into the 10K word response, so I wanted to break it out into its own post. I feel pretty strongly about the potential of decoupling how we deliver technology across government, but for this to be successful we are also going to have to decouple the business and politics of it all as well. This post reflects my current research and thinking about the business of APIs in government, and is part of some ongoing work I am doing around API management, public data, and how we begin to think differently about how government engages with the public in a digital age.

There are many interpretations of what a microservice is, but for the purposes of this post, it is a simple set of APIs that delivers one precise set of government services. The API definition, database, back-end code, management layer, documentation, support, and all other essential elements are self-contained, and usually stored in a single Github or Bitbucket repository, when delivering microservices. Each microservice possesses its own technical, business, and political contract, outlining how the service will be delivered, managed, supported, communicated, and versioned. These contracts can be realized individually, or grouped together as a larger, aggregate contract that can be submitted, while still allowing each individual service within that contract to operate independently.

Decoupling The Business Of Delivering Government Digital Services

The microservices approach isn’t just about the technical components. It is about making the business of delivering vital government services more modular, portable, and scalable. Something that will also decouple and shift the politics of delivering critical services to veterans. Breaking things down into much more manageable chunks that can move forward independently at the contract level. Helping to both simplify and streamline the delivery of services for the provider, the vendor, as well as any other stakeholders involved in the software lifecycle. Microservices isn’t just about decoupling the technology, it is about decoupling the business of delivering digital services:

  • Micro Procurement - One of the benefits of breaking down services into small chunks is that the money needed to deliver each service becomes much smaller, potentially allowing for a more liquid and flowing procurement cycle. Each service has a micro definition of the monetization involved with the service, which can be aggregated by groups of services and projects.
  • Micro Payments - Payments for service delivery can be baked into the operations and life cycle of the service. API management excels at measuring how much a service is accessed, and testing, monitoring, logging, security, and other stops along the API life cycle can all be measured, with payments delivered depending on quality of service, as well as volume of service.

Amazon Web Services already has the model for defining, measuring, and billing for API consumption in this way. This is the bread and butter of the Amazon Web Services platform, and the cornerstone of what we know as the cloud. This approach to delivering, scaling, and ultimately billing for the operation and consumption of resources just needs to be realized within each agency, and the rest of the federal government. We have seen a shift in how government views the delivery and operation of technical resources using the cloud over the last five years; we just need to see the same shift for the business of APIs over the next five years.

Changing The Way Government Does Business

API management is where you begin changing the way government does business. API management has been used for a decade to measure, limit, and quantify the value being exchanged at the API level. Now that API management has been baked into the cloud, we are starting to see the approach being scaled to deliver at a marketplace level. With over ten years of experience delivering, quantifying, metering, and billing at the API level, Amazon is the best example of this monetization approach in action, with two distinct ways of quantifying the business of APIs.

  • AWS Marketplace Metering Service - SaaS style billing model which provides a consumption monetization model in which customers are charged only for the number of resources they use–the best known cloud model.
  • AWS Contract Service - Billing customers in advance for the use of software, providing an entitlement monetization model in which customers pay in advance for a certain amount of usage, which could be used to deliver a certain amount of storage per month for a year, or a certain number of end-user licenses for some amount of time.
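
To make the metering side tangible, here is a sketch of what reporting usage for a government microservice might look like against the AWS Marketplace Metering Service using boto3. The product code and dimension name are placeholders, and real usage would tie into however your gateway counts requests.

```python
# Report an hour of API request volume to the AWS Marketplace
# Metering Service, the consumption-based model described above.
import datetime

import boto3

metering = boto3.client("meteringmarketplace", region_name="us-east-1")


def report_usage(requests_served):
    """Meter one hour of request volume against a usage dimension."""
    return metering.meter_usage(
        ProductCode="example-gov-service",  # hypothetical product code
        Timestamp=datetime.datetime.utcnow(),
        UsageDimension="Requests",          # a dimension like those below
        UsageQuantity=requests_served,
        DryRun=True,  # flip to False when actually billing
    )
```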

This provides a framework for thinking about how the business of microservices can be delivered. Within these buckets, AWS provides a handful of common dimensions for thinking through the nuts and bolts of these approaches, quantifying how APIs can be monetized across distinct areas:

  • Users – One AWS customer can represent an organization with many internal users. Your SaaS application can meter for the number of users signed in or provisioned at a given hour. This category is appropriate for software in which a customer’s users connect to the software directly (for example, with customer-relationship management or business intelligence reporting).
  • Hosts – Any server, node, instance, endpoint, or other part of a computing system. This category is appropriate for software that monitors or scans many customer-owned instances (for example, with performance or security monitoring). Your application can meter for the number of hosts scanned or provisioned in a given hour.
  • Data – Storage or information, measured in MB, GB, or TB. This category is appropriate for software that manages stored data or processes data in batches. Your application can meter for the amount of data processed in a given hour or how much data is stored in a given hour.
  • Bandwidth – Your application can bill customers for an allocation of bandwidth that your application provides, measured in Mbps or Gbps. This category is appropriate for content distribution or network interfaces. Your application can meter for the amount of bandwidth provisioned for a given hour or the highest amount of bandwidth consumed in a given hour.
  • Request – Your application can bill customers for the number of requests they make. This category is appropriate for query-based or API-based solutions. Your application can meter for the number of requests made in a given hour.
  • Tiers – Your application can bill customers for a bundle of features or for providing a suite of dimensions below a certain threshold. This is sometimes referred to as a feature pack. For example, you can bundle multiple features into a single tier of service, such as up to 30 days of data retention, 100 GB of storage, and 50 users. Any usage below this threshold is assigned a lower price as the standard tier. Any usage above this threshold is charged a higher price as the professional tier. Tier is always represented as an amount of time within the tier. This category is appropriate for products with multiple dimensions or support components. Your application should meter for the current quantity of usage in the given tier. This could be a single metering record (1) for the currently selected tier or feature pack.
  • Units – Whereas each of the above is designed to be specific, the dimension of Unit is intended to be generic to permit greater flexibility in how you price your software. For example, an IoT product which integrates with device sensors can interpret dimension “Units” as “sensors”. Your application can also use units to make multiple dimensions available in a single product. For example, you could price by data and by hosts using Units as your dimension. With dimensions, any software product priced through the use of the Metering Service must specify either a single dimension or define up to eight dimensions, each with their own price.

These dimensions reflect the majority of API services being sold out there today, ensuring we don’t find ourselves in a rut when measuring value, like just paying per API call. They allow government API plans to possess one or more dimensions, beyond any single use case.

  • Single Dimension - This is the simplest pricing option. Customers pay a single price per resource unit per hour, regardless of size or volume (for example, $0.014 per user per hour, or $0.070 per host per hour).
  • Multiple Dimensions – Use this pricing option for resources that vary by size or capacity. For example, for host monitoring, a different price could be set depending on the size of the host. Or, for user-based pricing, a different price could be set based on the type of user (admin, power user, and read-only user). Your service can be priced on up to eight dimensions. If you are using tier-based pricing, you should use one dimension for each tier.

This provides a business framework that government can offer to vendors and 3rd party developers, allowing them to operate their services within a variety of business models. Derived from many of the hard costs they face, while providing additional volume-based revenue based upon how many API calls any particular service receives.

Beyond this basic monetization framework, I’d also recommend adding in an incentive framework that would dovetail with the business models proposed, but then provide different pricing levels depending on how well the services perform, and deliver on the agreed upon API contract. There are a handful of bullets I’d consider here.

  • Design - How well does a service meet API design guidelines set forth in governance guidance.
  • Monitoring - Has a service consistently met its monitoring goals, delivering against an agreed upon service level agreement (SLA).
  • Testing - Beyond monitoring, are APIs meeting granular interface testing, along a regular testing & monitoring schedule.
  • Communication - Are service owners meeting expectations around communication regarding a service’s operations.
  • Support - Does a service meet required support metrics, making sure it is responsive and helpful.
  • Ratings - Provide a basic set of metrics, with accompanying ratings for each service.
  • Certification - Allowing service providers to get certified, receiving better access, revenue, and priority.

All of the incentive framework is defined and enforced via the API governance strategy for the platform, making sure all microservices, and their owners, meet a base set of expectations. When you take the results and apply them weekly, monthly, and quarterly against the business framework, you can quickly begin to see some different pricing levels and revenue opportunities emerge around all microservices. If you deliver consistent, reliable, highly ranked microservices, you get paid higher percentages, enjoy greater access to resources, and receive prioritization in different ways via the platform–if you don’t, you get paid less, and operate fewer services.
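
As a purely hypothetical illustration of how this could roll up, the sketch below scores a service against the incentive areas above and derives a payout multiplier from the average–the areas, thresholds, and percentages are invented, the point is just that governance results can feed the business framework mechanically.

```python
# Derive a hypothetical payout multiplier from governance ratings,
# applied weekly, monthly, or quarterly against the business framework.
INCENTIVE_AREAS = ["design", "monitoring", "testing", "communication", "support"]


def payment_multiplier(scores):
    """Map 0-100 governance scores to a payout multiplier."""
    average = sum(scores[area] for area in INCENTIVE_AREAS) / len(INCENTIVE_AREAS)
    if average >= 90:
        return 1.10  # certified tier: higher percentage, priority access
    if average >= 75:
        return 1.00  # baseline payout
    return 0.85      # under-performing services get paid less


quarterly_scores = {"design": 92, "monitoring": 88, "testing": 99,
                    "communication": 81, "support": 90}
print(payment_multiplier(quarterly_scores))  # 1.1
```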

This model is already visible on the AWS platform, and all the pieces are there to make it happen for any platform operating on top of AWS. The marketplace, billing, and AWS API Gateway connection to API plans all exist. When you combine the authentication and service composition available at the AWS API Gateway layer with the IAM policy solutions available via AWS, an enterprise grade solution for delivering this model securely at scale comes into focus.

Incentivizing Government API Vendors and Contractors

Keep everything small, and well defined. Measured, reported upon, and priced using the cloud model, connecting to a clear set of API governance guidance and expectations. The following areas can support paying and incentivizing contractors based upon not just usage, but also meeting the API contract.

  • Management - API management puts all microservices into plans, then logs, meters, and tracks the value exchanged at this level.
  • Marketplace - Turning the platform into a marketplace that can be occupied by a variety of internal, partner, vendor, 3rd party, and public actors.
  • Monetization - A granular understanding of all the resources it takes to deliver each individual service, and of the costs associated with operating at scale.
  • Plans - A wealth of API plans in place at the API gateway level, tied to IAM policies, and in alignment with API governance expectations.
  • Governance - Providing a map, and supporting guidance, around government platform API governance. Understanding, measuring, and enforcing consistency across the API lifecycle–platform wide.
  • Value Exchange - Using the cloud model, which is essentially the original API management, marketplace, and economy model. Not just measuring consumption, but using it to maximize and generate revenue from the value exchanged across the platform.

When you operate APIs on AWS and Azure, the platform as a service layer can utilize and benefit from the underlying infrastructure as a service monetization framework. Meaning, you can use AWS’s business model for managing the measuring, paying, and incentivizing of microservice operators. All the gears are there, they just need to be set in motion to support the management of a government API marketplace platform.

I have been studying Amazon full time for almost eight years. I’ve been watching Azure play catch up for the last three years. I run my infrastructure, and a handful of clients, on AWS. I understand the API landscape of both providers, and how they can be woven into this vision for the business of government APIs. I see the AWS API stack, and the Azure API stack, as a starter set of services that can be built upon to deliver a government microservices platform. All the components are there. It just needs the first set of services to be defined, delivering the essential building blocks any platform needs–things like compute, storage, DNS, messaging, etc. Then progress to other, more outward-facing application and system integration services.

My objective with this approach is to enable government services to be delivered as individual, self-contained units that can be used as part of a larger orchestration of government services. Opening up government and letting some sunlight in. Think about what Amazon has been able to achieve by delivering its own internal operations as services, remaking not just retail, but also touching almost every other industry with Amazon Web Services. The Amazon Web Services origin myth provides a powerful and compelling narrative for any company, organization, institution, or government agency to emulate.

My proposal is not meant to be a utopian vision for how government works. However, it is meant to shine a light on existing ways of delivering services via the cloud, with APIs at the center. Helping guide each service in its own individual journey, while also serving the overall mission of the platform–to help the veteran be successful in their own personal journey.


Your Microservices Effort Will Fail Because You Will Never Decouple Your Business

I’m regularly surprised by companies who are doing microservices while failing to see the need to change organizational culture, treating microservices as some magic voodoo that will fix all their legacy technical debt. As if simply decoupling and breaking down the technology, without any re-evaluation of the business and politics behind it, will fix everything, and set the company, organization, institution, or government agency on a more positive trajectory. In coming years, we will continue to hear stories about why microservices do not work, from endless waves of groups who were unable to do the hard work to decouple, and reorganize the operations behind the services they provide.

The monolith legacy systems I’m seeing targeted are widely seen as purely technology, which is why they are often labeled as technical debt. What is missing from this targeting and labeling is any acknowledgement of the people and decisions behind the monolith. The years of business, political, and cultural investment into the monolith. How will we ever unwind, or properly address, the monolith if we do not see the organizational, human, and business aspects of why it exists in the first place? Are we talking about the business decisions that went into creating and perpetuating the monolith? It is highly likely we will be making some of the same decisions with microservices, which could end up being worse than when we made them with a single system. A distributed mess is often more painful than a consolidated mess.

I’m seeing endless waves of large organizations mandating that their teams invest in microservices, with no mandate for microteams, microbudgets, microdecisionmaking, or any of the other decoupling needed to make microservices truly work independently. I attach micro as a joke. I really don’t feel micro is the constant that needs applying when it comes to services, or the business and organizational mechanisms behind them. However, it is the word du jour, and one that gets at some of the illnesses our organizations are facing in 2018. In reality, it is more about decoupling and decomposing the technology, business, and politics of our operations into meaningful units that can be deployed, operated, and deprecated independently of each other.

My point is that your microservices effort will fail if you aren’t addressing the business side of the equation. If your microservices team(s) still exist within your legacy organizational structure, you really haven’t decoupled or decomposed anything. The old ways of making decisions, and dealing with budget impacts, will still reflect what happened with the previous monolith. Your technology will be independently operating, but still beholden to the same ways of deciding and funding what actually happens on the ground. The result will resemble having an entirely new motor that you are running without lubricant, or possibly with old, thick, expired lubricant that prevents your new motor from ever delivering at full capacity, eventually breaking down in ways you never imagined while operating your existing monolith.


Why I Like A Service Mindset Over A Resource Focus When It Comes To APIs

I am currently crafting a set of services as part of my Human Services Data API (HSDA) work. The core set of services for organizations, locations, and services are grouped together as a single service, as this is what I was handed, but all the additional APIs I introduce will be bundled as separate, individual services. Over the last couple of weeks I’ve introduced seven new services, with a handful more coming in the near future. I’m enjoying this way of focusing on services, over the legacy way that is very resource focused, as I feel like it lets me step back and look at the big picture.

When I was defining the core API for this work I was very centered on the resources I was making available (organization, locations, and services), but once I took on a service mindset I began to see a number of things I was missing. With each service I find myself thinking about the full life cycle, not just the APIs that deliver the service. I’m thinking about the easy ones like design, deployment, and management, but I’m also thinking about monitoring, testing, and security. Then I’m delivering documentation, support, communications, and thinking about my monetization strategy, and access plans. I’m not just doing this once, I am thinking about it in the context of each individual service, as well as across all of them, taking care of the business of the services I’m delivering, not just the technical.

While some folks I talk to look at some of this as repeat work across my projects, I just see them as common patterns that I should be reusing, refining, and delivering in consistent ways. I’m thinking about delivering the technology in a consistent way, and the operational side as well, but I’m also beginning to think about education, training, and how I can help folks on the provider and consumer side of things learn how things are working. I’m not just doing the technical heavy lifting to deliver APIs and then walking away, I’m bundling each service with what is needed to be valuable and successful as an actual service, one that is API driven from start to finish. The service is accessible via an API, but it is also delivered, managed, and supported using APIs–everything has an API.

The Human Services Data APIs (HSDA) I am delivering aren’t just a single API, or set of services. They are an open source set of services that I’m putting out there for others to adopt and deliver as part of their own operations. I don’t want these to just be plug and play APIs, I want them to be plug and play services that deliver the information people need to find vital services in their community. Thinking of my APIs as services, and breaking them up into independent microservices, helps me address the technical, business, and politics of delivering the technical components cities and organizations are needing. I’ve been pushing the business and politics of APIs since I started, and trying to do things in as small of pieces as I can since the beginning, but the microservices conversations I’ve been tuning into have helped me think beyond the tech, and the size, and actually consider how I’m doing this to deliver services to humans–it is just an interesting twist that my primary project is all about delivering human service microservices. ;-)


Your Microservices Effort Will Fail Just Like Your API And SOA Initiatives

You are full steam ahead with your microservices campaign. You’ve read Martin Fowler’s blog post, and talked about the topic with your team for the last six months. After a couple of pilot projects, you are diving in, and have started decoupling the monolith of systems that you depend on to operate your business each day. You have mapped out all the technical details of the code and backend systems in play, and have targeted about 30% of existing systems for reworking using a microservices strategy. Yet, despite all your planning, your microservices effort will still fail just like your API efforts, and their predecessor the SOA initiative, did.

Despite all your research, planning, and eye for technical detail, in about 7 months everything will begin to slow, and by month 10 you will begin to get very, very frustrated. You see, you haven’t included any of the human element in your planning. You haven’t thought about the business, cultural, legal, financial, and other non-technical aspects of operating your systems. You’ve done a fine job of developing a strategy for decoupling your monolith database, but you haven’t invested any time in what it will take to educate and bring up to speed all the business users, support, QA, and other folks you will need to actually make all this work. You are so blinded by technological trends, you forget to spend time talking to the people.

You are feeling good right now because you have surrounded yourself with yes men. People who are in perfect alignment with what you are doing. They think like you, and have drank the same kool-aid. However, once you start implementing your grand strategy to make everything smaller, more micro, you will begin to see friction. Sales folks will begin to see you as a threat as you fragment their landscape, and you will be seen as taking a piece of their action. Business users will begin to freak out, because while they had to battle with IT before to get access to resources, now they have to deal with many smaller teams, with even more unfriendly people to talk with–you just increased the scope of what they need to know. Many will see your work as just increasing complexity, and expanding the landscape of their work, in an already hectic world. As this perception increases they will find ways to throw a monkey wrench into your finely tuned plan.

Your SOA rollout failed because it was too heavy handed, dictated everything, gave vendors too strong of a voice, while also strengthening IT too much in the eyes of business groups. Your API efforts underestimated the fatigue management and business users experienced from SOA, and your team never really believed in APIs anyways–they just thought it was a fad, trend, and pipe dream. Now your team is sold on it being all about offering up smaller services, and you are paying lip service to the fact that “service” should include more than just the technical, but you actually haven’t done the heavy lifting. You don’t have your training, support, documentation, QA, testing, discovery, terms of service, security, and other critical aspects of each service in place, let alone a comprehensive view of how this works across the thousands of microservices you are planning. Let’s get real, your microservices plan will fail just like API, and SOA. You are just too in love with the technical, and willfully blind to the human side of all of this, to ever make it work.

Note: If my writing is a little dark this week, here is a little explainer–don’t worry, things will be back to normal at API Evangelist soon.


Some Microservice Thoughts Around My Human Services API Work

The Human Services Data API I have been working on is about defining a set of API paths for working with the organizations, locations, and services that are delivering human services in cities around the world. As I’m working to evolve the OpenAPI for the Human Services Data API (HSDA), I’m constantly mindful of bloat and unnecessary expansion of features, and always working to separate things by concern. My thoughts have evolved due to a hackathon I attended this week in San Francisco, where a team at Optimizely worked to decouple an existing human services application from its backend and helped teach it to speak Human Services Data Specification (HSDS)–allowing it to speak a common language around the services that we humans depend on daily.

As the hackathon team was decoupling the single page application (React) from the API backend (Firebase), I took the API calls behind it and published them to Github as two JSON files. One of the files was locations, which contained 217 human service locations in San Francisco; the other was metadata, which contained a handful of categories being used to organize and display the locations. In this situation, there is no notion of an organization, just 217 locations, offering human services across five categories. This legacy application and forward-engineering hackathon project was quickly becoming a microservices project–ironically, a microservices project that was about delivering human services. ;-)
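
To give a sense of the decoupling, here is a minimal sketch of consuming those two files once they are published–the repository path and file names are hypothetical placeholders, not the actual project URLs:

```python
# A hedged sketch of reading the two decoupled JSON files straight from
# Github; the repository path below is a hypothetical placeholder.
import requests

BASE = "https://raw.githubusercontent.com/example-org/human-services-link/master"

locations = requests.get(BASE + "/locations.json").json()
metadata = requests.get(BASE + "/metadata.json").json()

# Assuming the metadata file carries the category list used for display.
categories = metadata.get("categories", [])
print(len(locations), "locations across", len(categories), "categories")
```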

Looking at this unfolding project through a microservices lens, I needed to provide a single service. In the context of Link-SF, the original project, I needed to offer a service that would deliver 217 locations where people can find human services in the areas of food, housing, hygiene, medical, and technology via a web or mobile application. To help me achieve my goals I began to step through each of the steps in the lifecycle of any self-contained microservice:

  • Github - Each of my services begins with a Github repository, so I created a new repo.
  • Definitions - I defined the service I wanted as an OpenAPI.
  • Design - I worked to keep the design of it as simple as I possibly could.
  • DNS - I relied on Github’s DNS for this service, but may set up my own subdomain.
  • Database - I published a YAML data file into the data folder for the Github repository, acting as the database for my service, leveraging Github and Jekyll to broker database connectivity (see the sketch after this list).
  • Deployment - I deployed a simple static JSON API driven from the locations YAML store.
  • Management - I am using Github as the management platform for my API, helping me deploy and manage consumption of my service. I’m using the Github API as the programmatic layer for managing my service operations to help me continuously deploy and integrate with this service.
  • Portal - I am using Github pages as the portal for my service, providing both human and programmatic access to my service, making human service locations more available.
  • Documentation - I published static HTML, and Swagger UI documentation for the service.
  • Support - I have published a support page, and will be providing support via Github issues, Twitter, and email.
  • Communications - I have a blog published, and will be publishing the story of the service using the Jekyll CMS blogging solution.
  • Caching - The API, portal, and all aspects of the service I’ve launched are being cached by Github, riding on the backs of their Content Delivery Network (CDN).
  • Reliability - While Github has been known to have stability issues, it is still one of the most reliable ways to host data and API driven services.
  • Encryption - I’m leveraging the Github DNS, and using their encryption in transit by default. I will be using my own subdomain and encryption in the future.
  • Security - I’m offloading platform security to Github. They have more resources than I do. I’m also looking to keep things more secure by keeping my services as static as possible.
  • Testing - I am setting up a series of monitors to ensure the service stays available, and delivering expected data and promised schema. I will be publishing a status update and history page.
  • Transparency - Everything involved with my service runs on Github, either as a public or private repository, with the entire lifecycle transparent publicly, or privately to whoever is given access to the repository using Github.
  • Observability - The entire service runs on Github, with the entire lifecycle observable in the Github history, leveraging their existing infrastructure for identity and access management (IAM), and continuous integration and deployment.
  • Discovery - There is an APIs.json available in the root of the project, providing a machine readable index of the schema, data, API, and other resources available via this service.
  • Client - I’ve published an HTML / JavaScript client on the home page of the project, with an editor for managing the data, which leverages the Github API for reading and writing data to the services repository.
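
The Database and Deployment steps above lean on Jekyll rendering a YAML data file as static JSON on Github Pages. Here is the equivalent build step as a minimal Python sketch–the _data/locations.yml and api/locations.json paths mirror Jekyll conventions, but are my assumptions, not the actual project layout:

```python
# A hedged sketch: render a YAML data file in the repository as a static
# JSON file, which Github Pages can then serve as a read-only API.
import json
import pathlib

import yaml  # PyYAML

data = yaml.safe_load(pathlib.Path("_data/locations.yml").read_text())

out = pathlib.Path("api/locations.json")
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text(json.dumps(data, indent=2))
print("wrote", len(data), "locations to", out)
```

Once the generated file is committed, Github Pages serves it over its CDN, which is what lets the caching, reliability, and encryption items above mostly become Github’s responsibility rather than mine.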

I’m calling my new service, born out of the legacy Link-SF application, Human Services Link. However, the only thing remaining of the Link-SF application is the data. I’ve forward engineered the schema, API, and web interface to all run on Github as a single, yet decoupled service. It is a reduction of the overall Human Services Data Specification (HSDS) schema, focusing in on just providing services at a handful of locations, reducing complexity and scope, to deliver a specific service–do one thing and do it well. I will be launching an Amazon EC2 and containerized version of this human services microservice, but I wanted to look at this one incarnation of it through the microservices lens. Sure, it’s not your classic microservices definition, but I think it holds up to a microservices test, it just makes some different architectural choices than the rest of the community might have made.

My human services microservice definition solves a single problem, for a specific type of end-user. It was originally geographically limited to San Francisco, but with this evolution I’ve made sure there is a state/province field, as well as country, so that the solution can be deployed for any city. I want the compute, database, DNS, API, portal, docs, UI elements, and admin tools to all be self contained for the delivery of a single microservice, but I also want it all forkable, so you can launch this service over and over, in any city around the globe. There is still a lot of work to be done on this project before it is ready for prime time, but I’ve planted the seeds when it comes to delivering microservices in my world this way. I’m hoping it will be something I can’t ignore, and will keep pushing forward as part of my overall vision of how we can deliver sustainable API driven services.

I really like the static nature of this approach. I really like the forkability of this approach, and how you can use Github to separate out concerns by organization. I really like how I can offload much of the operation of my backend to Github, including security, CDN, and scalability / reliability. It is definitely not a pattern everyone should be following when doing microservices, but it does provide a static, forkable pattern that can really help keep services very focused and self contained, while still available to work in service of a bigger architectural picture. I’ll keep playing around with this approach to delivering human service microservices, and see what I can push forward and make stick. I’m hoping it can provide a low cost way for cash strapped municipalities to better deliver up to date information about human services to their entire population.


Decoupling The Business Of My Human Services Microservice

I’ve been looking at my human services API work through a microservices lens, triggered by the deployment of a reduced functionality version of the human services implementation I was working on this week. I’m thinking a lot about the technical side of decoupling services using APIs, but I want to also take a moment and think about the business side of decomposing services, while also making sure they are deployed in a way that meets both the provider and consumer side of the business equation. My human services microservice implementation is in the public service, which is a space where the business conversation often seems to disappear behind closed doors, but in reality needs to be front and center with each investment (commit) made into any service.

Let’s take a moment to look at the monetization strategy and operational plan for my human services microservice. Yes, a public data microservice should have a monetization strategy and a plan for operating and remaining sustainable. The goals for this type of microservice will be radically different than they would be for a commercial microservice, but it should have them all the same.

  • Monetization - How am I evaluating the investment into this project alongside any value that is generated, which I can potentially capture, or exchange for money to keep things going?
    • Acquisition - What did it take for me to acquire the data and the skills necessary to make the deployment possible?
    • Development - What time was invested in setting up the platform, and developing the schema, data, definitions, code, and visual elements?
    • Operations - What does it take to operate the service? Maintain it, answer questions, provide support, and the other realities of providing an online service today.
    • Direct Value - What are the direct benefits of having this service available to people looking for human services, or organizations looking to provide human services?
    • Indirect Value - What are the indirect benefits of having this service available, like increased conversation around human services, or maybe traffic and awareness of the Open Referral organization?
    • Partners - What partnership opportunities are actively being sought out, or will be opened up by having this service available?
    • Reporting - What type of reporting is necessary to operate and monetize this service, from tracking page views to understanding who is integrated with the data, and consuming data via APIs, or possibly through continuous integration of the Github repository?
  • Plan - What is my plan for making this service available to the public, partners, or maybe for internal use across my own projects?
    • Elements - My human services location API is designed to be publicly available, forkable, and reusable by anyone.
    • Limits - There are no limits on how each human services microservice can be used, or forked and reused. Ideally, any implementation provides attribution, and acknowledges the source of the framework or data, but there really are no rules.
    • Metrics - I am measuring unique page views on each page, including the machine readable YAML, JSON, and other formats. I’m also tracking stars, forks, and commits for each service (see the sketch after this list).
    • Commercial - Providing a clear track for commercial vendors to understand that the project needs their support, and can be improved upon and evolved through their underwriting.
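
For the metrics item above, most of the signals I care about are already exposed by the Github API. A minimal sketch of pulling them for a list of service repositories might look like the following–the repository name is a hypothetical placeholder, and unauthenticated calls are rate limited, so a token would be needed at any real scale:

```python
# A hedged sketch: pull stars, forks, and recent commit activity for
# each service repository using the public Github API.
import requests

REPOS = ["example-org/human-services-link"]  # placeholder repository list

for repo in REPOS:
    meta = requests.get("https://api.github.com/repos/" + repo).json()
    commits = requests.get(
        "https://api.github.com/repos/" + repo + "/commits",
        params={"per_page": 30},
    ).json()
    print(repo,
          "stars:", meta.get("stargazers_count"),
          "forks:", meta.get("forks_count"),
          "recent commits:", len(commits))
```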

I need to have a coherent snapshot of what I’ve invested into each of my services. I need to have a base plan for how I will be executing the business side of each service–even if it is just making something available for free. There are two dimensions to this conversation: 1) my view as the creator of this service, and 2) the view of the folks who fork and implement it as a service. Both dimensions should have a monetization snapshot, and a plan for executing within this business snapshot. I need all of this decoupled from any other service I am offering, but ideally they all use a common set of reusable patterns, just like the technical aspects of my microservices effort.

Just like needing the compute, database, DNS, and other technical layers to be stable and scalable across my microservices, I need the costs associated with these elements to be predictable and affordable. I need to know what it costs to define, design, deploy, manage, and deprecate my services. I need to know the best path forward for making them public, keeping them private, and being honest with commercial partners about the value that is being generated, both directly and indirectly. I need a way to report my costs and revenue across hundreds or thousands of these services. I need to be able to scale this both horizontally across many services, and vertically for single services which get deployed over and over, reused and continuously integrated wherever they are needed using Github.

I’ll keep applying this model across my human services project. I’m thinking that I’m going to be developing a whole buffet of human service microservices that run 100% on Github like this, but I am also playing around with other varietals that run on other popular cloud platforms as well. I’m not one to subscribe to any particular API dogma, I’m just looking to explore what is possible, and do all of it as cheaply as I possibly can, growing the number of projects I’m able to tackle. Of course, none of this would be possible without my partners funding my work, and helping me connect the technical and business aspects of doing API Evangelist.


Reducing Developers To A Transaction With APIs, Microservices, Serverless, Devops, and the Blockchain

A topic that keeps coming up in discussions with my partner in crime Audrey Watters (@audreywatters) about our podcast is the future of labor in an API world. I have not written anything about this, which means I’m still in the early stages of any research into this area, but since it has come up in conversation, and is reflected regularly in my monitoring of the API space, I need to begin working through my ideas in this area–a process that helps me better see what is coming down the API pipes, and fill the gaps in what I do not know.

Audrey has long joked about my API world using a simple phrase: “reducing everything to a transaction”. She says it mostly in jest, but other times I feel like she wields it as the Cassandra she channels. I actually bring up the phrase more than she does, because it is something I regularly find myself working in the service of as the API Evangelist. By taking a pro API stance I am actively working to break legacy business, institutional, and government processes down into a variety of individual tasks, or if you see things through a commercial lens, transactions.

Microservices A microservices philosophy is all about breaking down monoliths into small bite size chunks, so they can be transacted independently, scaled, evolved, and deprecated in isolation. Microservices should do one thing, and do it well (no backtalk). Microservices should do what they do as efficiently as possible, with as few dependencies as possible. Microservices are self-contained, self-sufficient, and have everything they need to get the job done under a single definition of a service (a real John Wayne of compute). And of course, everything has an API. Microservices aren’t just about decoupling the technology, they are about decoupling the business, and the politics of doing business within SMBs, SMEs, enterprises, institutions, and government agencies–the philosophy for reducing everything to a transaction.

Containers Containers are a microservice way of thinking about software that is born in the clouds, a byproduct of the virtualization and API-ization of IT resources like storage and compute. In the last decade, as IT services moved from the basement of companies into the cloud, a new approach to delivering the compute, storage, and scalability needed to drive this new microservices way of doing business emerged, called containers. In 2017 businesses are being containerized. The enterprise monolith is being reduced down to small transactions, putting the technology, business, and politics of each business transaction into a single container, for more efficient development, deployment, scaling, and management. Containers are the vehicle moving the microservices philosophy forward–the virtualized embodiment of reducing everything to a transaction.

Serverless Alongside a microservice way of life, driven by containerization, is another technological trend (undertow) called serverless. With the entire IT backend being virtualized in the cloud, the notion of the server is disappearing, lightening the load for developers in their quest for containerizing everything, turning the business landscape into microservices that can be distilled down to a single, simple, executable, scalable function. Serverless is the codified conveyor belt of transactions rolling by each worker on the factory floor. Each slot on a containerized, serverless, microservices factory floor possesses a single script or function, allowing each transaction to be executed and replicated, applied over and over, scaled, and fixed as needed. Serverless is the big metal stamping station along a multidimensional digital factory assembly line.

DevOps Living in microservices land, with everything neatly in containers, being assembled, developed, and wrenched on by developers, you are increasingly given more (or less) control over the conveyor belt that rolls by you on the factory floor. As a transaction developer you are given the ability to change the direction of your conveyor belt, speed things up, apply one or many metal stamp templates, and orchestrate as much, or as little, of the transaction supply chain as you can keep up with (meritocracy 5.3.4). Some transaction developers will be closer to the title of architect, understanding larger portions of the transaction supply chain, while most will be specialized, applying one or a handful of transaction templates, with no training or awareness of the bigger picture, simply pulling the DevOps knobs and levers within their reach.

Blockchain Another trend (undertow) that has been building for some time, and that I have managed to ignore as much as I can (until recently), is the blockchain. Blockchain and the emergence of API driven smart contracts have brought the technology front and center for me, making it something I can no longer ignore, as I see signs that each API transaction will soon be put in the blockchain. The blockchain appears to be becoming the decentralized (ha!) and encrypted manifestation of what many of us have been calling the API contract for years. I am seeing movements from all the major cloud providers, and lesser known API providers, to ensure that all transactions are put into the blockchain, providing a record of everything that flows through the API pipes, and has been decoupled, containerized, rendered serverless, and made available for DevOps orchestration.

Ignorance of Labor I am not an expert in labor, unions, and markets. Hell, I still haven’t even finished my Marx and Engels Reader. But, I know enough to be able to see that us developers are fucking ourselves right now. Our quest to reduce everything to a transaction, decouple all the things, and containerize and render them serverless makes us the perfect tool(s) for some pretty dark working conditions. Sure, some of us will have the bigger picture, and make a decent living being architects. The rest of us will become digital assembly line workers, stamping, maintaining a handful of services that do one thing and do it well. We will be completely unaware of dependencies, or how things are orchestrated, barely able to stay afloat, pay the bills, leaving us thankful for any transactions sent our way.

Think of this frontline in terms of Amazon Mechanical Turk + API + Microservices + Containers + Serverless + Blockchain. There is a reason young developers make for good soldiers on this front line. Lack of awareness of history. Lack of awareness of labor. It makes for great digital factory floor workers, stamping transactions for reuse elsewhere in the digital assembly line process. This model will fit well with current Silicon Valley culture. There will still be enough opportunity in this environment for architects and cybersecurity theater conductors to make money, exploit, and generate wealth. Without the defense of unions, government, or institutions, us developers will find ourselves reduced to transactions, stamping out other transactions on the digital assembly line floor.

I know you think you’re savvy. I used to think this too. Then after having the rug pulled out from under me, and the game changed around me by business partners, investors, and other actors who were playing a game I’m not familiar with, I have become more critical. You can look around the landscape right now and see numerous ways in which power has set its sights on the web, completely distorting any notion of the web being a democratic, open, inclusive, or safe environment. Why do us developers think it will be any different with us? Oh yeah, privilege.


Containerized Microservices Monitoring Driving API Infrastructure Visualizations

While I track what is going on with visualizations generated from data, I haven’t seen much that is new and interesting when it comes to API driven visualizations, or specifically visualizations of API infrastructure. This week I came across an interesting example in a post from Netsil about mapping microservices so that you can monitor them. It is a pretty basic visualization of each database, API, and DNS element in your stack, but it does provide a solid example of visualizing not just the deployment of database and API resources, but also DNS, and other protocols in your stack.

The Netsil microservices visualization is focused on monitoring, but I can see this type of visualization also being applied to design, deployment, management, logging, testing, and any other stop along the API lifecycle. I can see API lifecycle visualization tooling like this becoming more commonplace, and playing more of a role in making API infrastructure more observable. Visualizations are an important part of the storytelling around API operations that moves things beyond just IT and dev team monitoring, making it more observable by all stakeholders.

I’m glad to see service providers moving the needle when it comes to helping visualize API infrastructure. I’d like to see more embeddable solutions deployed to Github emerge as part of API life cycle monitoring. I’d like to see what full life cycle solutions are possible when it comes to my partners, like deployment visualizations from Tyk and DreamFactory APIs, management visualizations with 3Scale APIs, and monitoring and testing visualizations using Runscope. I’ll play around with pulling data from these providers, and publishing it to Github as YAML, which I can then easily make available as JSON or CSV for use in some basic visualizations.
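
The workflow I have in mind is simple enough to sketch out–the endpoint, token, and response shape below are hypothetical stand-ins for whatever each provider actually exposes:

```python
# A hedged sketch: pull monitoring data from an API service provider and
# publish it as YAML in a Github repository, where it can be converted to
# JSON or CSV and fed into embeddable dashboard visualizations.
import pathlib

import requests
import yaml  # PyYAML

resp = requests.get(
    "https://api.example-provider.com/v1/monitors",  # placeholder endpoint
    headers={"Authorization": "Bearer YOUR_TOKEN"},  # placeholder token
)
resp.raise_for_status()

out = pathlib.Path("_data/monitors.yml")
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text(yaml.safe_dump(resp.json()))
print("published", out)
```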

If you think about it, there really should be a wealth of open source dashboard visualizations that could be embedded on any public or private Github repository, for every API service provider out there. API providers should be able to easily map out their API infrastructure, using any of the API service providers they are already using to operate their APIs. Think of some of the embeddable API status pages we see out there already, and what Netsil is offering for mapping out infrastructure, but something for every stop along the API life cycle, helping deliver visualizations of API infrastructure no matter which stop you find yourself at.


Bringing The API Deployment Landscape Into Focus

I am finally getting the time to invest more into the rest of my API industry guides, which involves deep dives into core areas of my research like API definitions, design, and now deployment. The outline for my API deployment research has begun to come into focus and looks like it will rival my API management research in size.

With this release, I am looking to help onboard some of my less technical readers with API deployment. Not the technical details, but the big picture, so I wanted to start with some simple questions to help prime the discussion.

  • Where? - Where are APIs being deployed? On-premise, and in the clouds. Traditional website hosting, and even containerized and serverless API deployment.
  • How? - What technologies are being used to deploy APIs? From using spreadsheets, document and file stores, or the central database. Also thinking smaller with microservices, containers, and serverless.
  • Who? - Who will be doing the deployment? Of course, IT and developer groups will be leading the charge, but increasingly business users are leveraging new solutions to play a significant role in how APIs are deployed.

The Role Of API Definitions While not every deployment will be auto-generated using an API definition like OpenAPI, API definitions are increasingly playing a lead role as the contract that doesn’t just deploy an API, but also sets the stage for API documentation, testing, monitoring, and a number of other stops along the API lifecycle. I want to make sure to point out in my API deployment research that API definitions aren’t just overlapping with deploying APIs, they are essential to connecting API deployments with the rest of the API lifecycle.
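
To make this concrete, here is a rough sketch of a definition leading deployment: it reads an OpenAPI document and registers a stub route for every path it declares. The openapi.yaml file name, the use of Flask, and the stub payload are my assumptions for illustration, not any particular provider’s tooling:

```python
# A hedged sketch: let an OpenAPI definition drive deployment by
# generating a Flask stub route for every path the contract declares.
import yaml  # PyYAML
from flask import Flask, jsonify

app = Flask(__name__)

with open("openapi.yaml") as f:
    spec = yaml.safe_load(f)

def make_stub(path):
    def stub(**params):
        # Echo which contract path this stub implements.
        return jsonify({"path": path, "status": "stub"})
    # Give each view a unique endpoint name derived from the path.
    stub.__name__ = "stub_" + path.strip("/").replace("/", "_").replace("{", "").replace("}", "")
    return stub

for path in spec.get("paths", {}):
    # OpenAPI path templating ({id}) maps onto Flask converters (<id>).
    rule = path.replace("{", "<").replace("}", ">")
    app.add_url_rule(rule, view_func=make_stub(path))

if __name__ == "__main__":
    app.run(port=8080)
```

The same definition file can then feed documentation, testing, and monitoring tooling without being rewritten for each stop along the lifecycle.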

Using Open Source Frameworks Early on in this research guide I am focusing on the most common way for developers to deploy an API: using an open source API framework. This is how I deploy my APIs, and there are an increasing number of open source API frameworks available out there, in a variety of programming languages. In this round I am taking the time to highlight at least six separate frameworks in the top programming languages where I am seeing sustained deployment of APIs using a framework. I don’t take a stance on any single API framework, but I do keep an eye on which ones are still active, and enjoying usage by developers.

Deployment In The Cloud After frameworks, I am making sure to highlight some of the leading approaches to deploying APIs in the cloud, going beyond just a server and framework, and leveraging the next generation of API deployment service providers. I want to make sure that both developers and business users know that there is a growing number of service providers who are willing to assist with deployment, and with some of them, no coding is even necessary. While I still like hand-rolling my APIs using my preferred framework, when it comes to some simpler, more utility APIs, I prefer offloading the heavy lifting to a cloud service, saving me the time of getting my hands dirty.

Essential Ingredients for Deployment Whether in the cloud, on-premise, or even on device and at the network level, there are some essential ingredients to deploying APIs. In my API deployment guide I wanted to make sure and spend some time focusing on the essential ingredients every API provider will have to think about:

  • Compute - The base ingredient for any API, providing the compute under the hood. Whether it’s bare metal, cloud instances, or serverless, you will need a consistent compute strategy to deploy APIs at any scale.
  • Storage - Next, I want to make sure my readers are thinking about a comprehensive storage strategy that spans all API operations, and hopefully multiple locations and providers.
  • DNS - Then I spend some time focusing on the frontline of API deployment–DNS. In today’s online environment DNS is more than just addressing for APIs, it is also security.
  • Encryption - I also make sure encryption is baked into all API deployment by default, in both transit and storage.

Some Of The Motivations Behind Deploying APIs In previous API deployment guides I usually just listed the services, tools, and other resources I had been aggregating as part of my monitoring of the API space. Slowly I have begun to organize these into a variety of buckets that help speak to many of the motivations I encounter when it comes to deploying APIs. While not a perfect way to look at API deployment, it helps me think about the many reasons people are deploying APIs, craft a narrative, and provide a guide for others to follow that is potentially aligned with their own motivations.

  • Geographic - Thinking about the increasing pressure to deploy APIs in specific geographic regions, leveraging the expansion of the leading cloud providers.
  • Virtualization - Considering the fact that not all APIs are meant for production and there is a lot to be learned when it comes to mocking and virtualizing APIs.
  • Data - Looking at the simplest of Create, Read, Update, and Delete (CRUD) APIs, and how data is being made more accessible by deploying APIs.
  • Database - Also looking at how APIs are being deployed from relational, NoSQL, and other data sources–providing the most common way for APIs to be deployed.
  • Spreadsheet - I wanted to make sure and not overlook the ability to deploy APIs directly from a spreadsheet, making APIs within reach of business users (see the sketch after this list).
  • Search - Looking at how document and content stores are being indexed and made searchable, browsable, and accessible using APIs.
  • Scraping - Another often overlooked way of deploying an API, from the scraped content of other sites–an approach that is alive and well.
  • Proxy - Evolving beyond early gateways, using a proxy is still a valid way to deploy an API from existing services.
  • Rogue - I also wanted to think more about some of the rogue API deployments I’ve seen out there, where passionate developers reverse engineer mobile apps to deploy a rogue API.
  • Microservices - Microservices has provided an interesting motivation for deploying APIs–one that potentially can provide small, very useful and focused API deployments.
  • Containers - One of the evolutions in compute that has helped drive the microservices conversation is the containerization of everything, something that complements the world of APIs very well.
  • Serverless - Augmenting the microservices and container conversation, serverless is motivating many to think differently about how APIs are being deployed.
  • Real Time - Thinking briefly about real time approaches to APIs, something I will be expanding on in future releases, and thinking more about HTTP/2 and evented approaches to API deployment.
  • Devices - Considering how APIs are being deployed on device, when it comes to Internet of Things, industrial deployments, as well as even at the network level.
  • Marketplaces - Thinking about the role API marketplaces like Mashape (now RapidAPI) play in the decision to deploy APIs, and how other cloud providers like AWS, Google, and Azure will play in this discussion.
  • Webhooks - Thinking of API deployment as a two way street. Adding webhooks into the discussion and making sure we are thinking about how webhooks can alleviate the load on APIs, and push data and content to external locations.
  • Orchestration - Considering the impact of continuous integration and deployment on API deployment specifically, and looking at it through the lens of the API lifecycle.
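
To make the spreadsheet motivation above a little more concrete, here is a hedged sketch that serves the rows of a spreadsheet’s CSV export as a simple read-only JSON API–the locations.csv file name is an assumption for illustration:

```python
# A hedged sketch: expose the rows of a spreadsheet (exported as CSV) as
# a simple read-only JSON API, putting deployment within reach of
# business users.
import csv

from flask import Flask, jsonify

app = Flask(__name__)

def load_rows():
    # Re-read on every request so spreadsheet edits show up immediately.
    with open("locations.csv", newline="") as f:
        return list(csv.DictReader(f))

@app.route("/locations")
def list_locations():
    return jsonify(load_rows())

@app.route("/locations/<int:index>")
def get_location(index):
    rows = load_rows()
    if index >= len(rows):
        return jsonify({"error": "not found"}), 404
    return jsonify(rows[index])

if __name__ == "__main__":
    app.run(port=8080)
```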

I feel like API deployment is still all over the place. The mandate for API management was much better articulated by API service providers like Mashery, 3Scale, and Apigee. Nobody has taken the lead when it comes to API deployment. Service providers like DreamFactory and Restlet have kicked ass when it comes to not just API management, but also making sure API deployment is part of the puzzle. Newer API service providers like Tyk are also pushing the envelope, but I still don’t have the number of API deployment providers I’d like when it comes to referring my readers. It isn’t a coincidence that DreamFactory, Restlet, and Tyk are API Evangelist partners–it is because they have the services I want to be able to recommend to my readers.

This is the first time I have felt like my API deployment research has been in any sort of focus. I carved this layer off of my API management research some years ago, but I really couldn’t articulate it very well beyond just open source frameworks, and the emerging cloud service providers. After I publish this edition of my API deployment guide I’m going to spend some time in the 17 areas of my research listed above. All these areas are heavily focused on API deployment, but I also think they are all worth looking at individually, so that I can better understand where they intersect with other areas like management, testing, monitoring, security, and other stops along the API lifecycle.


Your Microservices Effort Will Fail If You Only Focus On The Technical Details

I have self-censored stories about microservices, because I have felt the topic is as charged as linked data, REST, and some parts of the hypermedia discussion. Meaning there are many champions of the approach who insist on telling me, and other folks, how WRONG they are in their approach, as opposed to helping us work through our understanding of exactly what microservices are, and how to do them well.

For me, when I come across tech layers that feel like this, they are something that is very tech-saturated, with the usual cadre of tech bros leading the charge, often on behalf of a specific vendor, or specific set of vendor solutions. Even with this reality, I've read a number of very smart posts and white papers on microservices in the last year, outlining various approaches to designing, engineering, and orchestrating your business using the "microservices diet".

Much of what I read nails much of the technology in some very logical, and sensible ways--crafted by some people with mad skills when it comes to making sense of very large companies, and software ecosystems. Even with all of this lofty thinking, I'm seeing one common element missing in most of the microservices approaches I have digested--the human element.

I hear a lot of discussion about technical approaches to unwinding the bits and bytes of technical debt, but very little guidance for anyone on how to unwind the human aspect of all of this, acknowledging the years of business and political decisions that contributed to the technical debt. It's one thing to start breaking apart your databases, and systems, but it's another thing to start decoupling how leadership invests in technology, the purchasing decisions that have been made, and the politics that surround these existing legacy systems.

I don't know about you, but every legacy system I've ever come across has almost always had a gatekeeper, an individual or group of individuals who will fight you to the death to defend their budget, and what they know about tech (or do not know). I've encountered systems that have a dedicated budget, which only exists because the system is legacy, and with that gone, the money goes away too--sell me a microservices solution for this scenario!

Another dimension to this discussion is that investors in microservice solutions are not interested in their money being used for this area. It just isn't sexy to be spending money on dealing with corporate politics, unwinding years of piss poor business decisions, and educating and training the average business user around Internet technology. If you do not unwind these very human led, politically charged business environments, you will never unwind the systems that exist within their tractor beams. Never. I don't care how much YOU believe.

In the end, I'm not trying to make you feel like you are going to fail. My goal is to encourage more investment in this area by the microservice pundits, the vendors providing solutions in the space, and the VCs who are pouring money into these solutions. Many of the young, energetic folks at the helm of startups do not fully grasp the human side of corporate operations, and the potential quagmire that exists on the ground in front of them.

I am hoping that a handful of service providers out there can lower the rhetoric around their services and tooling, so that expectations get set at more realistic levels. Otherwise the push-back against the first couple failed waves of microservice implementations will become impenetrable, and blow any chance of making it work.


API Design, IoT, and Microservices Dominate The Talk Submissions For APIStrat Austin 2015

We closed the call for papers for APIStrat Austin 2015 last week, and have been processing the almost 100 talks that were submitted. We already had some session topics identified for the API conference this November 18th, 19th, and 20th in Austin, TX, but were also delighted to see the vast number of topics submitted through the call for papers process, resulting in this tag cloud.

This edition of APIStrat doesn't have a theme--we are just making it about the roots of the conference, and looking to focus the conversation around the most pressing issues we face across the space at the end of 2015--I think this tag cloud reflects that conversation.

We have gone through all the talks, grouped them by topic, and will be reviewing them with the session chairs we have assigned to help organize and manage each track. By October, we will begin publishing the calendar, and sharing more detail about each session track, and the speakers in attendance.

While the call for papers is closed, we have a handful of sponsorship opportunities left, so contact us if you want to help support the conversation at APIStrat. At the very least, make sure you are registered before this very intimate event sells out--you won't want to miss the conversations that occur.


Why You Are Not Doing Microservices Right!

Yeah, I know, it is a click-bait title. My goal is to not flame the haters, but educate everyone involved, both the people looking to learn about microservices, and those who are practicing microservices, so I'm hoping you will forgive me--my goal is not page views.

I'm sticking to my goal of not making microservices a regular thing I talk about, but I wanted to share some observations after listening to folks talk at Gluecon last week--a week that was full of microservices love (and hate). In the past, I mentioned that I received a handful of emails and DMs from folks letting me know I'm not doing microservices right, and after Gluecon I think I have a beginning list of why people generally are telling me I'm doing it wrong.

  • Tooling - People tend to identify with the tooling they use, and deeply associate the function with the tool. So if I don't use Puppet or Chef, I'm not orchestrating. If you're not using Jenkins, you're not doing continuous delivery, and so on. This is dangerous for microservices storytelling, but also is something that will continue.
  • Vendor - This is probably the most egregious reason I see people telling others they are doing microservices wrong, and is somewhat akin to the tooling item, but is really just rooted in selling you something. You will never do it right, unless you are their customer, so don't worry about it.
  • Scope - You just aren't doing it at the scope someone else is doing it. Microservices is something large enterprises do, not small companies, or individuals. You have to be operating across many availability zones, many servers, etc. Horseshit. Microservices can be done at any scope, don't listen to them.
  • Completeness - You are only doing some of the aspects of microservices. Maybe you don't need to orchestrate across many operational zones, and your traffic looks very different than some of the usual suspects. Microservices is very seldom a complete picture, it is more about doing what works for you, cherry picking the valuable items you can apply in your world, not any complete definition.
  • Incomplete Info - Someone just read a single blog post you wrote, hasn't been following the series or asked any questions, and just makes assumptions based on a single thing you wrote. While I feel you should make each post as complete as possible, the responsibility is on the reader to do their homework - Bwahahaha! Yeah right, pundits will never do that.

So far these are the biggest reasons I've extracted from people telling me I am not doing it right. Ultimately it comes down to power and control, which I saw play out with SOA, REST, Linked Data, and other areas of "API". They either want to make you feel small, or not cool because you aren't using the tools they are using, or they just want you to buy their snake oil.

In the end, for me, microservices are just a new storytelling package. They are mostly marketing, as many of the skeptics like to point out, and the companies who are truly doing microservices will care about sharing their real-world stories, and work overtime to help you understand--there are a handful of them out there, like Runscope and Iron.io.

For me, I will be focusing on the moving parts of microservices, which is why you'll see me talking about API design, deployment, management, testing, monitoring, and security over just the "microservices" umbrella. I could easily attack many of the pundits who challenge me, and say you aren't doing API management, evangelism, monetization, or other critical areas right, but why would I? Where does that get us?

As usual, if you have any questions, let me know how I can help. I'm happy to share, as I try to make sense of all of this. I'm adding some new research sites to support the conversation. 


If you think there is a link I should have listed here, feel free to tweet it at me, or submit it as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.