As a system architect, when you have to design an enterprise-grade M2M telematics framework, your first quest starts with two questions: what is the right toolset available for the job, and what are the gaps to fill?
From the software standpoint, M2M is all about interconnectivity, which makes light weight and interoperability the two vital factors for any successful implementation. Towards this goal one may even have to make some unconventional trade-offs, such as choosing between strong encryption and long battery life.
C/C++ is known for its low-overhead output, which made it the de facto standard for embedded systems and real-time frameworks. But can it retain its top position for M2M as well, by providing interoperable connectivity?
One can easily implement a high-throughput network layer with ZeroMQ in C/C++, but if it means being isolated from mainstream networks, one has to think twice. Likewise, one can build a highly secure real-time distributed MQTT network in C/C++, but if it drains the device battery, the device may not have much of a story to tell once it dies.
In traditional web client-server architectures, HTTP may look inferior to, say, MQTT or WebSocket because of its fresh handshake for every request, but in most remote-sensing applications one can achieve comparable performance with HTTP by batching the data points (and, more importantly, conserving network costs and battery life). Try maintaining a WebSocket connection over Wi-Fi while riding a train at average speed to understand the importance of disconnected protocols in remote-sensing applications. Then try doing that communication peer-to-peer across two trains crossing each other in opposite directions.
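To make the batching idea concrete, here is a rough C/C++ sketch that accumulates readings locally and flushes them in a single HTTP POST whenever a link is available. libcurl is used only for illustration; the endpoint URL and payload layout are assumptions, not part of any particular framework.

    // Sketch only: batch sensor readings and flush them over HTTP when a link is up.
    // Assumes libcurl; the endpoint URL and payload shape are hypothetical.
    #include <curl/curl.h>
    #include <string>
    #include <vector>

    struct DataPoint { double lat, lon, speed; long timestamp; };

    static std::string to_json(const std::vector<DataPoint>& batch) {
        std::string json = "[";
        for (size_t i = 0; i < batch.size(); ++i) {
            json += "{\"lat\":" + std::to_string(batch[i].lat) +
                    ",\"lon\":" + std::to_string(batch[i].lon) +
                    ",\"speed\":" + std::to_string(batch[i].speed) +
                    ",\"ts\":" + std::to_string(batch[i].timestamp) + "}";
            if (i + 1 < batch.size()) json += ",";
        }
        return json + "]";
    }

    // Returns true if the whole batch was accepted; the caller keeps the batch otherwise.
    bool flush_batch(const std::vector<DataPoint>& batch) {
        CURL* curl = curl_easy_init();
        if (!curl) return false;
        const std::string payload = to_json(batch);
        curl_slist* headers = curl_slist_append(nullptr, "Content-Type: application/json");
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/api/telemetry"); // hypothetical endpoint
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, payload.c_str());
        CURLcode rc = curl_easy_perform(curl);
        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
        return rc == CURLE_OK;  // on failure, the batch stays queued for the next attempt
    }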
In a nutshell, each protocol has its role to play somewhere in the stack: for example, MQTT for CAN-based OTMR devices on board the train, HTTP for relaying batched metrics while stationary at the station, ZYRE for peer-to-peer on-track communications, and so on. This means your M2M framework has to support all the required protocols in an interoperable manner.
MQTT, WebSockets, ZMQ, etc. all have very good client-side libraries for C/C++. What about HTTP REST?
Currently, consuming or producing REST services from C++ can be dealt with in two ways: hand-crafting the HTTP calls and the JSON handling yourself, or auto-generating that plumbing from a schema.
The first method, hand-crafting, gives more granular control, but it quickly turns into repetitive boilerplate that has to be reworked every time the service definition changes.
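For a feel of what hand-crafting looks like in practice, here is a minimal sketch that issues the HTTP call itself and picks fields out of the response by hand. libcurl and nlohmann/json are used only for illustration; the endpoint and field names are hypothetical.

    // Sketch of the hand-crafted approach: raw HTTP via libcurl plus manual JSON parsing.
    // The endpoint and field names are hypothetical.
    #include <curl/curl.h>
    #include <nlohmann/json.hpp>
    #include <string>

    static size_t collect(char* data, size_t size, size_t nmemb, void* out) {
        static_cast<std::string*>(out)->append(data, size * nmemb);
        return size * nmemb;
    }

    int main() {
        std::string body;
        CURL* curl = curl_easy_init();
        if (!curl) return 1;
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/api/vehicles/42"); // hypothetical
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);

        // Every field is extracted by hand; any change in the service's schema
        // means hunting down and editing code like this throughout the codebase.
        auto car = nlohmann::json::parse(body);
        std::string name = car["name"];
        double speed = car["speed"];
        (void)name; (void)speed;
        return 0;
    }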
Specifications such as Swagger and RAML try to address this problem: they allow REST functionality to be specified as a schema, with tools built around them to auto-generate proxy/stub classes for that schema.
But how can one make this fit into C++?
Let us list our requirements:
For example, if a vehicle is sending {Engine: "", Cylinders: ""}, one should be able to add a derived JSON field {..., Make: Engine + Cylinders} on the fly.
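A minimal sketch of such a derived field, using nlohmann/json purely for illustration; the field names come from the example above, while the concatenation rule for Make is an assumption.

    // Sketch: enrich an incoming JSON payload with a derived "Make" field on the fly.
    // Uses nlohmann/json for illustration; the field semantics are assumed from the example.
    #include <nlohmann/json.hpp>
    #include <iostream>
    #include <string>

    int main() {
        auto vehicle = nlohmann::json::parse(R"({"Engine":"V8","Cylinders":"8"})");

        // Derived field computed from existing ones, without touching the source schema.
        vehicle["Make"] = vehicle["Engine"].get<std::string>() + "-" +
                          vehicle["Cylinders"].get<std::string>();

        std::cout << vehicle.dump(2) << std::endl;
        return 0;
    }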
Modern C++14 features such as variadic templates, together with techniques like IOD symbols, help make this kind of class extension possible with static typing.
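The following is not the IOD library itself, only a bare-bones illustration of how variadic templates let a record carry an open-ended, statically typed set of fields in C++14; all type and member names are made up for the example.

    // Bare-bones illustration (not the IOD library): variadic templates carrying
    // an open-ended set of statically typed fields.
    #include <string>
    #include <tuple>
    #include <iostream>

    template <typename... Fields>
    struct Record {
        std::tuple<Fields...> fields;

        // Extend the record with one more statically typed field, yielding a new type.
        template <typename F>
        Record<Fields..., F> with(F field) const {
            return { std::tuple_cat(fields, std::make_tuple(field)) };
        }
    };

    struct Engine    { std::string value; };
    struct Cylinders { std::string value; };
    struct Make      { std::string value; };

    int main() {
        Record<Engine, Cylinders> base{ std::make_tuple(Engine{"V8"}, Cylinders{"8"}) };

        // The derived field is added at compile time; its type travels with the record.
        auto extended = base.with(Make{ std::get<Engine>(base.fields).value + "-" +
                                        std::get<Cylinders>(base.fields).value });

        std::cout << std::get<Make>(extended.fields).value << std::endl;
        return 0;
    }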
Consider a Vehicle REST service: car = {name: "", location: ..., speed: ...}.
Imagine a polling service that keeps fetching the car details from an M2M server to track its location and speed. The generated helper methods on the fields should automatically identify changes to the value of any field and raise a change event for that object (instead of the user having to manually test all field values on every iteration), with support for a "change threshold" so that changes below that threshold are not notified.
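A hand-rolled sketch of what such a generated field helper might do internally; the threshold value, callback shape, and class name are assumptions for illustration.

    // Sketch of threshold-based change notification, roughly what a generated
    // field helper could do internally. Threshold and callback shape are assumed.
    #include <cmath>
    #include <functional>
    #include <iostream>
    #include <string>

    class TrackedField {
    public:
        TrackedField(std::string name, double threshold,
                     std::function<void(const std::string&, double, double)> on_change)
            : name_(std::move(name)), threshold_(threshold), on_change_(std::move(on_change)) {}

        // Called on every polling iteration; fires only when the change exceeds the threshold.
        void update(double new_value) {
            if (std::abs(new_value - value_) >= threshold_) {
                on_change_(name_, value_, new_value);
            }
            value_ = new_value;
        }

    private:
        std::string name_;
        double threshold_;
        double value_ = 0.0;
        std::function<void(const std::string&, double, double)> on_change_;
    };

    int main() {
        TrackedField speed("speed", /*threshold=*/2.0,
            [](const std::string& field, double from, double to) {
                std::cout << field << " changed: " << from << " -> " << to << std::endl;
            });

        speed.update(80.0);  // fires: 0 -> 80 exceeds the threshold
        speed.update(80.5);  // below threshold: silently absorbed
        speed.update(95.0);  // fires again
        return 0;
    }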
For the producer side:
While the above requirements are basic, they are not sufficient for building distributed, enterprise-grade C++ REST applications, because they are missing the features below; a good C++ REST middleware should strongly consider supporting them as well.
The additional requirements to make the above suitable for enterprise-mode applications are:
Implementing changes without bringing down the system is one of the basic needs of any well-designed enterprise-mode software. To support this in the C++ REST scenario, we cannot allow the generators to be external tools that generate static code as part of the build process. Rather, the generators have to be part of the running system, generating dynamic classes on the fly from a schema URI or schema string.
Imagine a server that is part of a large workflow, consuming an API from some external server S1, processing it, and sending the processed data to some other external server S2.
What happens when you have to move away from S1 to some competitor providing the same services, but with slightly different parameter configurations? You cannot bring down the system to generate a new copy of the code and integrate it; that defeats the whole purpose of having JSON in the first place.
The system should be able to regenerate the internal JSON classes/objects on the fly. This requires the generator to support an additional '/regenerate' meta-functionality that remaps the whole class system to the new schema.
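One possible shape for such a runtime regeneration hook is sketched below; every name here (DynamicSchema, SchemaRegistry, regenerate) is hypothetical, and nlohmann/json stands in for whatever schema parser the middleware would actually use.

    // Hypothetical sketch of runtime regeneration: the field layout is rebuilt from a
    // schema string while the process keeps running. All names here are illustrative.
    #include <nlohmann/json.hpp>
    #include <map>
    #include <memory>
    #include <mutex>
    #include <string>

    class DynamicSchema {
    public:
        // Build (or rebuild) the field map from a JSON schema's "properties" section.
        static std::shared_ptr<DynamicSchema> fromString(const std::string& schema_json) {
            auto schema = std::make_shared<DynamicSchema>();
            auto parsed = nlohmann::json::parse(schema_json);
            const auto& props = parsed["properties"];
            for (auto it = props.begin(); it != props.end(); ++it) {
                schema->field_types_[it.key()] = it.value().value("type", "string");
            }
            return schema;
        }
        bool hasField(const std::string& name) const { return field_types_.count(name) > 0; }

    private:
        std::map<std::string, std::string> field_types_;
    };

    class SchemaRegistry {
    public:
        // The '/regenerate' meta-call would land here: swap in the new schema atomically,
        // so in-flight readers keep the old snapshot until they finish.
        void regenerate(const std::string& new_schema_json) {
            auto fresh = DynamicSchema::fromString(new_schema_json);
            std::lock_guard<std::mutex> lock(mutex_);
            current_ = fresh;
        }
        std::shared_ptr<DynamicSchema> snapshot() const {
            std::lock_guard<std::mutex> lock(mutex_);
            return current_;
        }

    private:
        mutable std::mutex mutex_;
        std::shared_ptr<DynamicSchema> current_;
    };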
Imagine a complex workflow system composed of many REST services (microservices, if you will). Tracking the flow of objects in such a system is vital for both auditing and maintenance (and billing, if any).
Each generated class should be capable of keeping track of its own metrics, and the wrapper/helper methods should be capable of reporting its status to any configured performance-monitoring/listener dashboards.
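A bare-bones illustration of per-object metric tracking and reporting; the counter names and the reporting sink (stdout here) are assumptions rather than any specific middleware API.

    // Bare-bones sketch of per-object metrics in a generated wrapper; the counter
    // names and the reporting sink are assumptions, not a specific middleware API.
    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <string>

    class RestCallMetrics {
    public:
        void recordCall(std::chrono::milliseconds latency, bool ok) {
            calls_++;
            if (!ok) failures_++;
            total_latency_ms_ += latency.count();
        }

        // Report to whatever monitoring listener/dashboard is configured;
        // stdout stands in for that sink here.
        void report(const std::string& object_name) const {
            long calls = calls_.load();
            std::cout << object_name << ": calls=" << calls
                      << " failures=" << failures_.load()
                      << " avg_latency_ms=" << (calls ? total_latency_ms_.load() / calls : 0)
                      << std::endl;
        }

    private:
        std::atomic<long> calls_{0};
        std::atomic<long> failures_{0};
        std::atomic<long> total_latency_ms_{0};
    };

    int main() {
        RestCallMetrics metrics;
        metrics.recordCall(std::chrono::milliseconds(42), true);
        metrics.recordCall(std::chrono::milliseconds(130), false);
        metrics.report("Vehicle");   // e.g. Vehicle: calls=2 failures=1 avg_latency_ms=86
        return 0;
    }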