Yocto Fundamentals

Embedded and cyber-physical systems are becoming more sophisticated and perform increasingly complex tasks. To shorten their time to market, embedded Linux distributions are used as the base environment for newly developed products. While having the system set up quickly is a big convenience, it comes at the cost of increased overall complexity of the solution, which means additional work is required for platform maintenance. Another level of complexity is added by the need to develop and maintain multiple product configurations covering systems of different scale, software and hardware architecture.

The Yocto Project addresses those issues by using a layered architecture to provide support for various hardware architectures, boards and software components. This approach not only allows existing software components to be reused but also makes them interchangeable, thus providing the building blocks for embedded systems developers and integrators.

Imagine a case where an embedded system manufacturer wants to build a custom product. In the traditional approach, engineers would have to prepare the bootloader, the Linux kernel and the root file system, customized to match the requirements of each platform they intend to support. Moreover, the application logic also needs to be integrated.

In an ideal world, the manufacturer would like to prepare one generic component that provides a solution to all specific problems. Such a component should then be reusable and reconfigurable across all base platforms of the manufacturer’s choice.

This is where the Yocto Project brings value. With Yocto it is possible to provide an application to the integration team in the form of a recipe contained in a product-specific layer. Such a layer can be added to any Yocto-based build system, usually without any additional work required to build embedded Linux system images for the target hardware platforms. This not only reduces the work required to bring up the system for the first time but also makes it possible to switch the base platform painlessly while staying focused on the solution.

Having recipes prepared with care and the system architected as a set of independent, feature-related layers keeps the system clean and lets developers reuse software components. It is also possible to modify recipes provided by other layers without polluting them, by using BitBake append files. Append files are a smart mechanism for modifying recipes provided by a third party while keeping those modifications in a separate repository, which makes the management and maintenance of code, build configurations and build steps easy and efficient.
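
To illustrate the mechanism, a minimal append file could look like the sketch below (the recipe name, patch and configure switch are hypothetical, and the syntax shown is the classic underscore-style override used by older Yocto releases):

# example-app_1.0.bbappend – kept in our own layer, next to a files/ directory
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"

# add a local patch and enable one extra configure option
SRC_URI += "file://0001-enable-custom-feature.patch"
EXTRA_OECONF += "--enable-custom-feature"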

OpenEmbedded – a well-known build framework for embedded Linux distributions that provides thousands of packages – together with the Yocto base layers forms the foundation of the system being developed, which can then be customized by other layers added on top.

As its name already suggests, the Board Support Package (BSP) layer provides the support for a particular board or set of boards. It is the place for configurations and customizations of the Linux kernel and bootloader, as well as for any other software components required by the platform features. It should also contain any machine-specific tuning options necessary for the compiler.

The Distribution layer allows producing a customized Linux distribution – a set of software packages that provide applications and libraries, plus the Linux kernel. It can be used to adjust low-level details more precisely: compile-time options, base files, package selection or branding. In many cases the reference Poky distribution is enough, so this layer is an optional customization.

The Application layer is a layer or a set of layers intended to provide application-specific features, most notably the application itself, but also libraries, frameworks and other extras the developer needs to build the final solution. All those components come in a generic form for portability; an example layout of such a layer is sketched below.
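
For illustration, an application layer might be organized like this (the names are hypothetical; the structure follows the usual Yocto layer conventions):

meta-myproduct/
├── conf/
│   └── layer.conf              # registers the layer with BitBake
├── recipes-apps/
│   └── myapp/
│       ├── myapp_1.0.bb        # builds and installs the application
│       └── files/              # patches and configuration files
└── recipes-images/
    └── images/
        └── myproduct-image.bb  # image recipe pulling in myapp and its dependencies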

An alternative to the Yocto Project is Buildroot. Both projects are able to generate a fully functional embedded Linux distribution, so what is the difference between them? Buildroot has a strong focus on simplicity and reuses existing, well-known technologies such as kconfig and make. Yocto, on the other hand, requires knowledge of BitBake and performs builds in a sophisticated framework based on Python and bash.

There is also a significant difference in output. The main artefact of Buildroot is a root filesystem (alongside a toolchain, a kernel and a bootloader). The primary output of Yocto, in contrast, is a package feed that then forms the foundation of the final root filesystem image. That means Buildroot is a better choice for single-configuration, monolithic systems with a fixed package set, while with a Yocto-based image one can work at the package level during development or even after deployment. Similarly, the extensible SDK (eSDK) provided by Yocto can be customized or updated. With such a development environment, developers are able to work on the application for the target system in a convenient way, without the need to rebuild the system image.

Not only is development easier – build times also differ. Buildroot has no caching mechanism for reliable incremental builds, in contrast to Yocto, which caches build results so that only modified packages are rebuilt. The difference is significant, especially for complex systems, in favour of the Yocto build system. Moreover, cache files produced by one build can be shared over the network and reused by multiple developers.

The configuration approach also differs between the two solutions. Buildroot stores its configuration in a single file which defines the whole system: bootloader, kernel, packages and CPU architecture. In Yocto, the configuration is divided into recipes and append files provided by separate layers. The top-level configuration is controlled from a single configuration file that uses the aforementioned recipes as building blocks.

Keeping specific configurations separate – e.g. for the distribution, machine architecture, BSP and root filesystem image – allows them to be reused, for example to build the same image for a number of different platforms. In Buildroot, each platform requires a separate, monolithic configuration, which means a great part of it will be duplicated.
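
As a rough sketch (the machine names are only examples, and this assumes the corresponding BSP layers are already listed in bblayers.conf and that MACHINE is passed through from the environment, as in the default Poky setup), the very same image recipe can be built for several boards just by switching one variable:

# build the same image for two different platforms
MACHINE=beaglebone-yocto bitbake core-image-minimal
MACHINE=raspberrypi4 bitbake core-image-minimal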

While developers may find the Yocto Project to have a somewhat steeper learning curve, its well-defined project structure saves time, optimizes work and, most importantly, scales very well – both vertically, towards increasingly complex embedded solutions, and horizontally, towards build approaches and platforms that support complex product families in a uniform manner. From the developer’s point of view, because Yocto allows the code and configuration to be organized in separate layers which can be stored in separate repositories, both the development and the maintenance of projects become very well structured. All those features make a Yocto-based work environment effective, convenient and well suited to all kinds of long-term development. This reduces time to market and the required effort, especially in the case of products with multiple configurations and base platforms.

Less talked about, but still great Rust features!

Rust is one of the most talked-about languages of recent years, as the Firefox browser is gradually being rewritten to use it and more and more companies are starting to adopt it. While its most prominent feature – the borrow checker – has been discussed numerous times, there are also many smaller features that make developing in Rust comfortable.

Convenient switching between toolchain versions
One of the most common issues that affect Rust programmers is that the most tempting language features are always available only in nightly. On the other hand, for professional development it’s a good idea to keep the ecosystem stable, with one toolchain version that is expected to build our codebase and that will be supported for as long as possible. Is it possible to seamlessly switch between different releases of the Rust toolchain?

Enter rustup – the Rust toolchain installer. It’s the tool that allows not only installing, but also updating and switching between different toolchains.
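
For example, switching toolchains boils down to a few commands (all of them are standard rustup subcommands; the per-project override is what makes the switching painless):

rustup toolchain install nightly     # install nightly alongside the default toolchain
rustup default stable                # keep stable as the global default
rustup override set nightly          # use nightly only inside the current project directory
rustup update                        # update all installed toolchains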

The main rustup website is rather minimalistic, but anyone in need of more information can find it in the quite extensive README of the rustup project.

Module system
A project’s source code can rarely be contained in a single source file. Different programming languages have different approaches to solving this problem – one of the best known in the embedded programming world is the C/C++ way: #include, which essentially causes the referenced file to be pasted in its entirety by the preprocessor.

This solution is simple to implement, easy to understand and usually works. There are pretty severe issues with this approach, though: any changes made in one file can spill into all others which include it – directly or indirectly. It can be a rogue #define which changes the behaviour of the code, or a using namespace directive.

There’s also a risk of including one file more than once, which would cause things to be defined twice – which is why every C/C++ developer has the include guard in their muscle memory or editor templates. Rust solves the problem differently – by using a module system.

Modules in Rust can work inside a single file in a way similar to namespaces in C++, with the addition that module contents are private by default – anything that needs to be visible outside of the module needs an extra pub keyword.

But when the module itself is marked as pub, it can be used from other source files while remaining completely independent of them. More information can be found in The Module System to Control Scope and Privacy chapter of the Rust Book.
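
A minimal sketch of the idea (the module and function names below are made up purely for illustration):

mod telemetry {
    // private by default – not visible outside the module
    fn encode(raw: f32) -> u32 {
        (raw * 100.0) as u32
    }

    // explicitly exported with pub
    pub fn report(raw: f32) -> u32 {
        encode(raw)
    }
}

fn main() {
    // only the pub item is reachable from outside the module
    println!("{}", telemetry::report(21.37));
}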

Crates
Almost all software developed today stands on the shoulders of giants by using at least one third-party library. Most modern programming languages provide an easy way for the code to depend on a given version of a third-party library, which can be automatically downloaded and built by the build system. Rust is no different in this matter – cargo, the build system used by Rust, reads the current project information from the Cargo.toml file, where dependencies on other libraries (called crates) and their versions are also stored. Then, during the build process, crates are downloaded and built.

Crates can be browsed using the Rust Package Registry, which also conveniently provides a snippet that can be pasted into Cargo.toml to use the chosen crate.
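
For example (the crate and version below are just an illustration), declaring a dependency in Cargo.toml:

[dependencies]
rand = "0.8"

is enough for cargo to fetch and build the crate during the next build, after which it can be used directly in the code:

use rand::Rng;

fn main() {
    // roll a six-sided die using the rand crate
    let roll: u8 = rand::thread_rng().gen_range(1..=6);
    println!("rolled {}", roll);
}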

As an interesting side note, cargo and crates are both references to cargo cult. Given the current development practice of stitching libraries together until things work, it’s a pretty humorous touch.

Integrated testing
Software testing is one of the basic elements of modern code hygiene. There are testing frameworks for almost all programming languages, but Rust decided to be a step ahead: a simple test framework is provided with the language itself. Citing a simple example from the Writing Tests chapter of the Rust Book:

#[cfg(test)]
mod tests {
    #[test]
    fn it_works() {
        assert_eq!(2 + 2, 4);
    }
}

In this example, the tests module will be compiled only for test execution, and the it_works function is marked as a test, therefore it will be run when the cargo test command is invoked.

But that’s not all – it’s possible not only to test the correctness of our program, but also to benchmark it. To do that, one needs to create a function marked as a benchmark, taking a Bencher object which does the measurement. Thanks to that, initialization of objects can be done outside of the measured code fragment. At the time of writing this text, benchmarking is an unstable feature and needs to be enabled using the test feature flag. More information can be found under the test library feature in The Unstable Book.
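
A minimal sketch of such a benchmark, assuming a nightly toolchain with the unstable test feature enabled (the function being measured is made up for illustration):

#![feature(test)]
extern crate test;

pub fn sum_squares(n: u64) -> u64 {
    (1..=n).map(|x| x * x).sum()
}

#[cfg(test)]
mod benches {
    use super::*;
    use test::{black_box, Bencher};

    #[bench]
    fn bench_sum_squares(b: &mut Bencher) {
        let n = 1_000; // setup happens outside the measured closure
        b.iter(|| sum_squares(black_box(n)));
    }
}

Such benchmarks are then executed with cargo bench on the nightly toolchain.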

Documentation tests
Rust also provides rustdoc, an automatic documentation generation system similar to Doxygen or Sphinx. One of the most common problems when documenting rapidly moving codebases is keeping the documentation up to date. Rust tackles the issue by allowing one to include code samples with assertions, which can be compiled and evaluated as part of the test run. More information can be found in the rustdoc documentation.
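
A short illustrative sketch (the crate and function names are made up): the example inside the doc comment is compiled and executed by cargo test, so documentation that stops matching the code fails the build.

/// Converts a temperature from degrees Celsius to Fahrenheit.
///
/// # Examples
///
/// ```
/// assert_eq!(my_crate::c_to_f(100.0), 212.0);
/// ```
pub fn c_to_f(celsius: f64) -> f64 {
    celsius * 9.0 / 5.0 + 32.0
}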

Summary
Rust provides not only a lot of nice programming constructs, but also a really comfortable ecosystem that aims at solving the practical problems programmers face in their daily jobs; therefore, it is shaping up to be not just an academic curiosity, but a tool that can be used in long-running commercial projects. If you haven’t played with it yet, try it out – Rust is not only a useful tool, it’s also lots of fun.

Meet Thaumatec during Cloudfest in Germany!

CloudFest in Rust, Germany is just around the corner – it takes place next week – and Thaumatec will certainly be attending. For the third year, CloudFest takes place in a different location but with the same objective: have fun while solving real-life problems. Thaumatec, for its part, is excited to build further on its IoT and embedded expertise and extend it with MS Azure IoT and cloud application modernization. After all, there is a broad market in those areas, with great advances that can simplify many day-to-day tasks which few know about – luckily, Thaumatec can help you.

CloudFest is the place to be next week, especially because Thaumatec will be there, learning new ways to improve what we already know so well: embedded and IoT. Our services around Azure IoT and cloud modernization are aimed at data-driven companies that embrace digital transformation – meaning everything is controlled from the cloud, monitored and even automated. Automation is just one of the qualities of Azure IoT and cloud modernization; others include connectivity, two-way communication and having everything handled from one place: the cloud.

Thaumatec’s visit to CloudFest will help us understand more complex problems around Azure IoT and cloud modernization, and will also give us a broader perspective on other areas such as universal acceptance, managed services and controlling data, enabling big data, machine learning and many more. Come and join the fun, but most of all, let us talk about the possibilities of MS Azure!

What’s new in The Things Network? What we saw during The Things Conference

The Things Network is unique of its kind: a huge, publicly available, global LoRaWAN™ network connecting over six thousand gateways in more than 130 countries. Because LoRa technology is tightly coupled with Thaumatec’s DNA, we didn’t hesitate a minute to save our spots at the annual event – The Things Conference – that took place in Amsterdam, the Netherlands at the turn of January and February 2019. It was two long days, full of workshops and presentations explaining new trends and directions in the LoRa industry.

We arrived at the venue early in the morning and quickly confirmed our assumptions regarding how active the community is. Walking by the conference booths, we had a lot of great conversations with representatives of the industry leaders, and just after grabbing a cup of coffee and making our way to the main stage we stumbled upon a giant wall presenting a multitude of recent LoRa-related hardware developments, be it new sensors, gateways, antennas or microchips. It took a significant amount of time even to just briefly review all the gadgets.

The main stage was where the most interesting things happened, and there were indeed a lot of engaging announcements made by the event hosts and invited guests. The first thing worth noting is the launch of “The Generic Node”, a device which is an answer to the problem of the high TCO of LoRa-based systems. You no longer need to design, develop and fabricate a specific device to serve your project needs. You can quickly deploy the generic device and focus on prototyping your solution. The Generic Node can be easily connected to the TTN network and provide you with telemetry data from many internal sensors, such as light, accelerometer, temperature, touch, humidity, etc. If you want to use LoRa technology without doing hardware design, The Generic Node is for you.

It is also worth noting that The Generic Node takes advantage of another piece of engineering art that was presented at the conference: the ATECC608A microchip, which provides hardware-based security to the LoRa transmission layer. The chip makes device provisioning secure and makes it impossible to tamper with the embedded cryptographic keys.

Source: https://www.electronicdesign.com/iot/teaming-secure-lorawan

But the ultimate announcement was still to come. The founders of The Things Network decided that the LoRaWAN stack should become accessible for everyone to use. Therefore, right during the presentation on the main stage, they made the GitHub repository of the V3 software stack public, giving everybody access to an implementation of the network stack that strictly follows the LoRaWAN Network Reference Model. What news!

In between the engaging presentations we had a chance to participate in several workshops. One of them was related to the Azure IoT Hub and the ways one can utilize it to quickly develop IoT applications in the cloud. During the hands-on workshop, specialists from Microsoft explained how you can connect your IoT device to the Azure cloud in literally less than 30 minutes, and that even included flashing the device with the appropriate software. Once your telemetry data lands in the cloud, you can use straightforward drag-and-drop mechanisms to quickly prototype your solution.

Finally, at the end of the second day, just before leaving the conference, we were once again pleasantly surprised. Everyone who attended the event was given a brand-new LoRa gateway, whose launch was also announced during the conference. It is a fully functional indoor LoRa gateway which is going to be available for purchase at a disruptively low price. It’s amazing that you no longer need to spend a lot of money on a gateway, nor do you need professional hardware development skills, to start your adventure with LoRa communication.

All the information above is just a small excerpt of the content available during The Things Conference. If you plan to extend your LoRa-related knowledge, or are just getting acquainted with the technology, we definitely recommend visiting the conference next year.

Our thoughts on ECS 2018

At the beginning of November we visited the Embedded Conference Scandinavia, which took place in Stockholm, Sweden. It was two days very well spent listening to fascinating lectures and chatting with the endless number of professionals who came there to present their most recent activities in the embedded field.

This year was even more interesting, since we decided to join the noble group of presenters and share our thoughts on and experiences with the LwM2M protocol in the IoT world. If you follow our blog, you have definitely seen that LwM2M is the next big thing in the area of device management.

The exhibition hall was crowded from the very morning, confirming the forecast that the IoT industry is developing very rapidly these days. While wandering between the exhibition booths, we were amazed by the amount of thought and ideas that had been put into all the different products presented there. Sensors, displays, antennas, cameras, drones, industrial automation devices… you name it. Considering that at each booth you could meet helpful and engaged staff, we found it hard to find time to participate in the lectures.

Speaking of lectures – this year the ECS team organized more than fifty presentations covering a broad variety of topics, including wireless communication, security, quality, etc. But the conference was not only about gathering theoretical knowledge. If you preferred to get your hands dirty, you could join one of the six workshops and get acquainted with new technologies the practical way.

Spending two days surrounded by the novelties of the embedded business is very exciting. You have a chance to grow your contact network very rapidly and get back in touch with the people you met during previous editions of the conference. We were also pleasantly surprised that some of our customers attended this year’s edition as well.

You can clearly see that Stockholm is on the bleeding edge of innovation in the embedded world, and we cannot wait to visit ECS next year!

From Team Projects Conference to a job in Thaumatec

Personal development of employees is a very important value for our company. Last summer, two of our new employees wrote a beautiful story – from being part of a university project sponsored by Thaumatec to a contract with our company – all in 4 months. We are preparing some new internship offers (also for non-IT people), so stay tuned – maybe you will become our next internship star?

We first met Arek and Piotr in February, when we decided to sponsor (again) the Team Projects Conference at Politechnika Wrocławska. Today, 4 months later, we asked them a few questions about their story and their Thaumatec experiences.

What was your first contact with Thaumatec?

At our University of Science and Technology, each student is obliged to pass a course called “Team engineering projects”. Students have to prepare a working software or hardware project. From the very beginning we wanted to take on a serious project – that’s why we decided to choose a complicated hardware project recommended for a team of 4-5 people.

In the beginning we were a bit scared, but we treated it as a challenge. Our course coordinator was Maciej Nikodem, one of the Thaumatec partners. He showed us the list of possible projects and we decided to create a mobile LoRa gateway – mostly because the IoT industry is growing very fast and this project could give our careers a rocket start. The huge community around the LoRa network was also an advantage. It was definitely a challenge for us, because we hadn’t worked with wireless communication before – only Arek had done some small projects in the past.

Tell us something about your project:

We were very surprised by how professional it was – we had weekly reviews, sprints, etc. The idea was to design a mobile LoRa gateway – weatherproof and not dependent on an external power source. We saw a niche in the market for a LoRa gateway that can be used everywhere – for example, to manage crops or animals.

The device is based on a Raspberry Pi – we added a solar panel, a battery and a charging controller. At the same time, we were working on a web app to manage the gateway from the browser. Also, the solutions on the market were quite expensive, so we decided to build the protective enclosure with our own hands. We think that LoRaMIG is quite unique – we saw similar constructions, but most of them needed a separate power source.

During our work we experienced a few problems, but we solved all of them. The biggest one was with the converter used for communication with the solar panel. We couldn’t find one that worked properly with the Raspberry Pi. We tested a few of them (even industrial ones), plus a few kernels. In the end, we found a good one which doesn’t use too much energy.

What happened to your project later?

At the end of the Team Projects Conference there was a contest for the best projects – and we won it in the hardware category! Next, we had to finish our internships (obligatory for every student), and we wanted to continue our work on LoRaMIG with the support of Thaumatec. We were very happy that the company agreed to that.

During our internship we were developing our LoRaMIG device and learning more about LoRa and wireless communication law in Poland. We were also testing the range of LoRaMIG and some sensors for other LoRa-based projects – we have to say that those tests were very promising.

And then you got the job offer in the company?

Yes, after the internship we received an offer to stay in the company and… here we are. We are now working on a new internal project.

What’s good about working in Thaumatec?

We always have huge technical support from all the employees – if we need knowledge or some devices, it is never a problem. Thaumatec has a big team and we can learn a lot from our colleagues. We really like the good atmosphere in the office – it’s almost like a family. As a company we are developing many interesting projects, and they change quite often, so there is always a new challenge on the horizon.

What plans do you have for the future?

We are still studying. Thaumatec has an office very close to Politechnika, so it’s convenient for us to combine studies with a job. We are currently in the last semester of our engineering studies and we plan to continue with a master’s degree.

Arek: I’m doing my engineering project in cooperation with Thaumatec – it will be a LoRa-based sensor platform.
Piotr: I’m doing a project on a platform to manage LoRa gateways.

Hawkish on RISC-V

Embedded devices are Thaumatec’s bread and butter, so we’re always super interested in the up-and-coming stars in the embedded hardware space. RISC-V is shaping up to be exactly that, and this makes us excited. In this blog post, I’ll take on explaining what it is, how it’s destined for greatness and, ultimately, why we think RISC-V matters.

What’s RISC-V anyway?
When you say embedded you typically think ARM: an instruction set architecture for platforms spanning from ultra-low power to powerhouses capable of competing with Intel’s and AMD’s offerings. ARMs are everywhere, from netbooks to mobile devices to toasters, and for a very good reason: they’re proven and by this point very well understood.

Sure, Intel is eyeing every opportunity to jump on the mobile bandwagon, but so far its ability to penetrate this market has been middling at best. There are other alternatives to ARM – AVR, MIPS, SPARC, OpenRISC or ARC – but they all seem to be pigeonholed into a single, narrow niche without the widespread appeal of ARM.

This state of affairs is in part the result of how the IP licensing business is structured. With the advent of fabless manufacturers and the ability to iterate on a proven design – especially given how time-consuming the integration process can be – ARM is the safest bet to make. The recipe for success is simple: build a SoC around the ARM ecosystem, ship it to TSMC for fabrication, profit.

There are dozens of shops like these in China alone and they make a steady profit. Note, however, that hardware is useless without the software and community around it, and that is something ARM has in spades. With ARM you get great support from compiler and tool developers. It’s not surprising that it is hard for a newcomer to cross this moat.

How is RISC-V different?
But when it comes to the economies of scale of low-end devices shipping in hundreds of millions, the only thing better than understood and inexpensive is free (or at the very least more flexible). Manufacturers of SoCs for low-end devices operate on slim margins and are entirely at the mercy of ARM’s licensing business. After all, in order to ship ARM-compatible cores, you have to pay for the license up front and, on top of that, pay royalties for every unit manufactured.

This is where RISC-V enters the stage. The effort, spearheaded by the University of California, Berkeley sometime around 2010, is already an interesting alternative to ARM. The idea behind RISC-V is an ambitious one: create an extensible, robust instruction set architecture for people to implement in silicon and release it under a permissive license. It’s not the only project that decided to go this route (the J-core effort immediately comes to mind), but it’s the most promising one so far.

With good support from gcc and with an LLVM backend in the works, RISC-V can already host both Linux and FreeBSD. Its ISA is extensible and supports everything one would want from a modern, scalable CPU: 32- and 64-bit integer instruction sets (with an embedded subset and a 128-bit variant being worked on); support for atomic operations; single-, double- and quad-precision floating point; a compressed instruction set; and many more. A lot of thought has been put into making the RISC-V design sturdy and usable across the entire spectrum of solutions ARM is used for today (and then some).

Remember, though, that this is only an architecture specification. What’s needed are actual realizations of the ISA. Thankfully, RISC-V has this covered too. There are about a dozen liberally licensed implementations one can load onto an FPGA or base an ASIC design on. There are also physical devices with RISC-V cores shipping today, in the form of the FE310 and U540 chips from SiFive (used in the HiFive1 and HiFive Unleashed respectively) or the GAP8 from GreenWaves.

Sceptics were quick to point out that a SoC is more than just a CPU. Much more. At the very least you need a system bus, a memory controller (and memory in general), various I/O hardware components from low- to high-speed (I2C, SPI, GPIO, USB, …), an MMU and quite a few other bits and pieces. Only once all of that is open as well do you get a completely open solution.

This end-to-end openness is likely of great importance only to the truly paranoid – people worried that the NSA is rummaging through their pony collection using the capabilities of the notorious Intel Management Engine. And I’m not dismissing their fears, nothing like that; it’s simply that we all know Rome wasn’t built in a day.

It turns out, however, that Chennai is the Rome of RISC-V. The Indian Institute of Technology Madras has undertaken the herculean task of building an entire SoC around RISC-V from scratch. IIT Madras’s SHAKTI processor is proudly open source and free of charge, thank you very much.

But the SHAKTI project is more than just a CPU. It’s ditching AMBA, the de facto standard bus that ARM designed, and going for Gen-Z instead. It includes an open-source SSD controller (based on SHAKTI cores, of course), 10G and 25G Ethernet components as well as many others, and it comes in many variants, from embedded to server-bound, from trivial to fault-tolerant and aerospace-ready. There’s also a Rust-based OS in the works, in case one needs a complete, secure environment. And if you want to drop a SHAKTI core into an existing design with SRIO or PCIe, they’ve got you covered as well.

RISC-V matters
It seems that even ARM accidentally validated the viability of RISC-V when its marketing geniuses launched a hit-piece website with “facts” about RISC-V. Swiftly pulled from the Internet, it showed that, if nothing else, the needle has been pushed a little. Historically, hardware has been all about walled gardens, but this is changing before our eyes. We may not achieve perfect hardware openness (not any time soon, at least), but perhaps we don’t have to.

With RISC-V maturing further, we’ll see more projects using it, which in turn will lead to a vibrant ecosystem coalescing around it. Both governments and the private sector are investing heavily in RISC-V, betting on its success. The RISC-V Foundation boasts an impressive set of members, including the likes of Google, Nvidia, Samsung and Western Digital. The market is hungry for embedded processors with vibrant communities around them. But most critically, the appeal of RISC-V doesn’t end there.

We’re really interested in the opportunities that are bound to arise from the maturity of RISC-V. It’s no secret that a lot of embedded projects today boil down to integrating well-understood, off-the-shelf solutions. And that’s not a complaint either – there’s a lot of potential for creative output in such projects. But bringing up a new platform is a completely different beast altogether, and one that we all should be excited to tame. Not that long ago, open hardware seemed like an unattainable goal with a very high price of admission for those who attempted it. RISC-V lowers this barrier, and that’s going to benefit us all, the same way open source software did. Good times.

Now. Can you spot where Harry Potter got paraphrased? 🙂

LWM2M fundamentals

IoT Expansion

According to most forecasts, such as those presented by Forbes, the IoT market is going to develop rapidly in the coming years. Some forecasts say that it will grow from $157B in 2016 to $457B in 2020. Such rapid technology development requires huge work aimed at unifying the ways we manage the billions of devices connected to the global network.

Easy communication between all these devices is a requirement for keeping the market and the entire IoT industry growing at even higher rates. Therefore, for many years now, a huge amount of work has been put into standardization in the field of device management systems.

Device Management
In 2002 the Open Mobile Alliance (OMA) was founded in order to streamline the device communication protocols of that time, like WAP, SyncML, etc. For the device management field, a dedicated standard was announced, called OMA DM. It is currently at its second generation (2.0) and, since it has been evolving for many years, it is now considered partly bloated. There are users complaining that, from the IoT point of view, some parts of the protocol are superfluous.

Additionally, OMA DM has its origins in the SyncML technology, which uses XML to format messages, and in protocols that are considered heavy for message transfer (HTTP, OBEX, WSP, etc.). Initially, OMA DM was targeted at mobile terminals, but the devices used to build an IoT network – such as tyre pressure sensors, light bulbs or digital street signs – are usually more constrained resource-wise, mainly because they are expected to consume as little energy as possible and to be cheap.

Parsing XML along with all the accompanying metadata and negotiating TCP connections are tasks that are often beyond the capabilities of such constrained devices. Even the latest version of OMA DM, which allows replacing XML with JSON, doesn’t solve the problem, although it helps a bit. The reduced payload still needs to be transferred over resource-consuming TCP.

LWM2M
Because of the aforementioned reasons, in 2012 OMA started working on a lighter protocol that would fit well into the environment of constrained devices. The first draft of the proposal for the Lightweight Machine-to-Machine (LWM2M) specification surfaced soon after. Its main assumptions are:

  • Low demand for device resources in terms of memory, networking and computing power
  • An open and extensible object model available to the entire industry
  • Security and reliability, even over spotty, constrained network connections
  • The same communication channel used for the data and control planes

Devices that communicate via LWM2M no longer need to support the TCP-based HTTP protocol, because all information can be transferred via a lightweight protocol called CoAP (Constrained Application Protocol), which is based on connectionless UDP. By design, UDP is generally much less complicated than TCP and puts less pressure on the devices that use it.

The designers of the LWM2M protocol have also taken care of the bloated message format and reduced the metadata accompanying the messages to a minimum. The data being transferred very closely resembles a pure byte stream instead of bloated messages based on XML or JSON. This approach greatly reduces the processing power required to transfer and serialize/deserialize messages. LWM2M users no longer need to pay the square- or angle-bracket tax; however, when one feels there is enough processing power available, JSON or TLV (type-length-value) can still be used, as this is not forbidden by the protocol.

Security in LWM2M is primarily based on the DTLS protocol (Datagram Transport Layer Security), whose security guarantees are very similar to those of the TLS protocol widely used on the Internet. LWM2M supports credentials based on pre-shared secrets, raw public keys or certificates.

LWM2M protocol stack and architecture

The resource model of LWM2M is also simpler than that of OMA DM. The protocol allows one to define:

  • objects (O), for example “temperature sensor”
  • instances (I) of objects, for example “sensor #1”, “sensor #2”
  • resources (R), for example “current temperature”

Such a model maps directly to CoAP Uniform Resource Identifiers (URIs).

Currently, there are several standard LWM2M objects defined by OMA; therefore, if one requests the resource “/5/0/3”, one will be presented with the state of the device firmware update:

  • Object #5 – “Firmware Update” – urn:oma:lwm2m:oma:5
  • Instance #0
  • Resource #3 – “State”

The protocol specification clearly defines what values can be stored in the resources and under which circumstances. Below is the official definition of the “Location” object, taken directly from the Open Mobile Alliance webpage. The entire object registry in its current state can be accessed via this site.
Location
Description
This LwM2M Object provides a range of location telemetry related information which can be queried by the LwM2M Server.

Object definition
  Name:        Location
  Object ID:   6
  Object URN:  urn:oma:lwm2m:oma:6
  Instances:   Single
  Mandatory:   Optional
Resource definitions

Each resource is listed as: ID – Name – Operations, Instances, Mandatory, Type, Units – Description.

  • 0 – Latitude – R, Single, Mandatory, Float, Deg – The decimal notation of latitude, e.g. -43.5723 [World Geodetic System 1984].
  • 1 – Longitude – R, Single, Mandatory, Float, Deg – The decimal notation of longitude, e.g. 153.21760 [World Geodetic System 1984].
  • 2 – Altitude – R, Single, Optional, Float, m – The decimal notation of altitude in meters above sea level.
  • 3 – Radius – R, Single, Optional, Float, m – The value in the Radius Resource indicates the size in meters of a circular area around a point of geometry.
  • 4 – Velocity – R, Single, Optional, Opaque – The velocity in the LwM2M Client is defined in [3GPP-TS_23.032].
  • 5 – Timestamp – R, Single, Mandatory, Time – The timestamp of when the location measurement was performed.
  • 6 – Speed – R, Single, Optional, Float, Meters per second – Speed is the time rate of change in position of a LwM2M Client without regard for direction: the scalar component of velocity.

Summary
Even though LWM2M is still relatively young, it has already proved its value and is currently used by many key players in IoT, like Microsoft (Azure), Ericsson (DDI) or Huawei (OceanConnect). The protocol is constantly being developed and there is an ongoing discussion about its next features. This makes it a good candidate to become a fundamental protocol of the IoT industry.
Among the features for future development are:

  • transferring object metadata, in order to reduce vendors’ dependence on the availability of the public object registry
  • a method of providing the LWM2M server with information about the availability of optional object resources, or about the possible values that can be written to a resource

Thaumatec will definitely keep an eye on the further development of the LWM2M protocol. If you also want to play with LWM2M, a good starting point would be to use some open-source libraries like Leshan (server) or Anjay (client library).

“Developers-dedicated travel agency”

We know – in all those dev-dedicated job offers you can read something about international projects. But what’s the best part of a developer’s job when working for international customers? The possibility to travel abroad. At Thaumatec it’s possible even if you are not at a super-senior level. That’s why last month we sent a bunch of our guys to the USA. How cool is that?

It’s not a rare occurrence in our company – we have a few customers from Silicon Valley, so we travel to visit them quite often. This time we sent six of our employees to Los Gatos – an incorporated town in Santa Clara County, California, home to the headquarters of Netflix.

“According to Bloomberg Businessweek, Los Gatos is ranked the 33rd wealthiest city in the United States. It is located in the San Francisco Bay Area at the southwest corner of San Jose in the foothills of the Santa Cruz Mountains. Los Gatos is part of Silicon Valley, with several high technology companies maintaining a presence there.”

Our developers went there for a month to support our customer, who is working on biometric security systems, with a demo of their product. They left at the end of February – but it’s sunny California, so even then they had amazing weather.

Los Gatos looks like the best place from which to explore California – located one hour from the ocean coastline, it is perfect for surfing lovers; one hour from San Francisco, good for those who have never seen the Golden Gate before; and slightly more than three hours from the amazing Yosemite National Park, for all those wanting more contact with nature.

Our guys were especially impressed by two things. They visited the Getty Center, a campus of the Getty Museum, an art museum in California opened in 1974 by J. Paul Getty. The museum is full of fascinating artworks, for example Vincent van Gogh’s painting “Irises”. Another cool place to see in California, recommended by our developers, is the California Science Center, where visitors can see an original space shuttle – Endeavour.

If you ever have a chance to visit Los Gatos (it’s quite easy when you work at Thaumatec – you can find our open positions here), you have to try some good food there – our guys really recommend seeing how culturally rich the local cuisine is. For example, they still can’t forget the amazing steaks, but also the excellent Ethiopian food and something called “Korean barbecue”. Sounds tasty, huh?

The whole trip ended with a very tasty garden barbecue with our customer. Want to try it too? Join Thaumatec!

Programming Atari

8-bit Atari computers (referred to simply as “Atari” from now on) were created to bring entertainment and simplify work in households in the eighties and early nineties. The Atari catchphrase “Power without the price” was surprisingly close to reality. The computers were quite cheap for those times (though not for people living in the constrained market conditions of Poland) and well designed, actually delivering the promised power.

The Atari catchphrase

Architecture
There were several Atari 8-bit models available on the market over the years, although the internal architecture did not vary significantly. Among the most important changes are:

  • Internal RAM extended from 64 KiB to 128 KiB in the more modern models (130XE)
  • Chip optimization (several chips replaced with a single, more integrated one)
  • Cosmetic changes (the chassis was visually adapted to match the more modern 16-bit Atari computers while retaining the same internals)

Atari was (and still is) powered by the MOS 6502C CPU, clocked at 1.773 MHz (PAL, 50 fps) or 1.79 MHz (NTSC, 60 fps). Most computers of that era had to be aligned with the TV signal frequency for the sake of simplicity. This means that most software runs faster on NTSC-based machines (music plays quicker, games are more difficult, etc.).


Atari 65 XE mainboard (source: http://oldcomputer.info)

The main CPU is supported by several auxiliary chips. At least three of them, together with their functionalities, are worth mentioning:

POKEY, which provides:

  • Four independent sound generators (channels)
  • Keyboard handler
  • Serial input/output
  • Pseudorandom number generator

ANTIC:

  • Programmable GPU with its own opcodes, registers, etc.
  • Generates graphical output
  • Manages character sets
  • Could be temporarily disabled to speed up CPU processing (a disabled ANTIC didn’t steal CPU cycles and didn’t block the memory bus)

GTIA:

  • Translates data provided by ANTIC into a TV signal
  • Could add an overlay with its own graphic elements (players and missiles, also known as “sprites”)
  • Detects collisions between screen elements (sprite vs sprite, sprite vs playfield – a feature of great importance to all game developers)

Since the creators of Atari wanted it to be much more than just another toy to play games on, they decided to equip the computer with fully fledged and extensible software. Each Atari computer came with an integrated BASIC interpreter that made it easy for everyone to write their own programs, for instance for home budget management. You could start programming one second after flipping the power switch.

Also, the bundled Atari Operating System had a lot of innovative features, including but not limited to:

  • The ability to be disabled. Yes, at any point you could decide to turn the OS off and work with the bare machine, having a lot more resources at your disposal (the OS no longer occupies memory or steals precious CPU cycles)
  • Support for device drivers. Communication with external devices could be done via a driver installed in one of the available slots. The driver was assigned a letter (A:, B:, etc. – just like in MS-DOS) and had to implement the standard interface. Thanks to that approach, even decades-old software can easily communicate with modern devices (SD cards, Bluetooth, network, laser printers).
  • A floating-point math package. The MOS 6502C was not capable of doing floating-point math at all, but thanks to the mathematical package included in the operating system one could immediately start doing some serious math (trigonometry, logarithms, square roots or just floating-point multiplication and division). Some of the math procedures were created by Steve Wozniak himself.

Community
The Atari community is still very active these days. Poland is among the top countries in terms of the number of community members, gatherings and software created. There are several gatherings in Poland organized on a regular, most often annual, basis. Among the major ones are:

  • SillyVenture – the world’s biggest Atari community gathering, organized annually in Gdańsk. In 2017 there were more than 200 visitors from various countries joining the party. Just for this single occasion they created 116 different software pieces (games, demonstrations, etc.). SillyVenture also hosts talks and lectures by legends of the Atari scene
  • WapNiak – a small and placid party with a long tradition, organized annually in Warsaw
  • Ironia – an annual gathering of Atarians organized near Wrocław
  • Forever – a huge Atari gathering organized by our fellows from Slovakia

The most important part of each gathering are the so-called “competitions”. They consist of a few different categories (music, graphics, game, etc.) so that every participant can prepare something they would like to present to the whole community. All works are then presented (usually during Saturday night) and everyone votes for their favourite creation. Winning such a competition guarantees lifetime glory and provides recognition among community members from around the world.


Recently released game Tensor Trzaskowskiego – winner of Ironia 2017 Atari Party

And when I say that the community is active, I mean active. To give you some numbers: for the 8-bit Atari alone, 81 new games were created in 2015 and 87 in 2016. That is a new game roughly every four to five days.

Besides the big gatherings, there are also smaller, local ones organized ad hoc, without much planning. They take place more or less regularly in every region of Poland with a strong Atari community (Trójmiasto, Warszawa, Kraków, Wrocław, Górny Śląsk).

It’s not only software that is actively created – the hardware folks are also very prolific and keep building devices that make it easier to use Atari with modern hardware and technologies like hard disks, Ethernet, LCD TVs, etc. The devices and extensions that I consider most valuable for every Atari user are:

  • Stereo – gives Atari one additional POKEY chip to drive the left and right sound channels separately
  • SIO2SD – with this device and a single SD card you can emulate up to 16 disk drives and quickly load any piece of software
  • SIDE2 – similar to SIO2SD, but sacrifices a little bit of compatibility for much greater, nearly instant, loading speeds
  • VBXE – a so-called “new graphics card” that enables Atari to manipulate graphics in the 16-bit way (blitters, big and colourful sprites, sharp picture)
  • RAM extensions – different kinds of devices that extend the standard 64 KiB of memory up to 4 MiB
  • DragonCart / WiFi Cart – with these devices you can connect Atari to an Ethernet or WiFi network
  • Rapidus – a turbo card that equips Atari with a 20 MHz CPU and 32 MiB of linear memory

Programming Atari
There are not many people who actively use Atari for doing actual software development. Machines of that kind are so severely limited that creating comfortable tools (editors, debuggers, etc.) is nearly impossible. Naturally, such tools exist, but they tend to use all the available resources, leaving barely any room for the program that is actually being developed. Sure, we have hard drives, memory expansions, RAM disks and an 80-column text mode, but all of that is easily beaten by a modern PC with contemporary tools. So if you’re not a nostalgia-driven nerd but want to do some real work, configure a cross-compiler for Atari on your PC.

Debugging with Altirra emulator
Such a cross-compiler will crunch the 6502 assembly code and create a binary file that can easily be loaded on Atari. Actually, you don’t even need an Atari, because there are several emulators in the wild, some of them still developed and maintained to this very day. Most emulators provide powerful tools that greatly improve the speed of development and debugging: step-by-step program execution, processor execution history, CRT beam location preview, breakpoints, integration with modern IDEs and real-time memory manipulation, just to name a few. In addition, such an emulator (when asked to) can be orders of magnitude faster than a real Atari in terms of CPU processing or I/O operations.

Besides emulators, one has the following tools at his disposal:

  • Wudsn! – an Eclipse IDE plugin that integrates the 6502 cross-compilers with the Eclipse IDE
  • g2f / Rastaconverter – tools for creating graphics for Atari. They use the power of a modern PC to optimize graphics in such a way that the old hardware can generate colourful and detailed images. You can just drop some .jpg graphics into these tools, wait several hours and receive 6502 code that displays beautiful graphics on Atari. Yes, you receive “code”, since creating graphics on Atari is in fact a programming task – you need to take care of interrupts, changing colours on the fly, tracing the CRT beam, etc.
  • RMT / other music trackers – tools for creating music for Atari, taking advantage of 4 or 8 (stereo) independent music channels. Also in the case of music, one can use the power of a modern PC to optimize the track layout and instrument definitions, just to minimize the memory footprint of the music track. Every single byte counts.
  • CC65 – a C compiler for the 6502. A decent option if the speed of pure assembly is not required, yet the easier approach via BASIC is too slow.
  • MadPascal – a brand new cross-compiler that creates 6502 assembly from the Pascal language


Atari image automatically generated from .jpg file

It is worth mentioning that also in the past, when Atari was in the spotlight, big software houses like LucasArts or our domestic L.K. Avalon used bigger machines like the Amiga to develop their games for the 8-bit Atari.

Curiosities
Despite having a relatively simple architecture and being a nearly 40-year-old computer, Atari still doesn’t cease to amaze the people who actively use it. The most motivated folks still reach beyond the well-established limits and keep discovering features that would render the creators of Atari speechless. Some of the tricks are possible only due to bugs in the hardware design and are very volatile, i.e. they can be reliably reproduced only on a particular Atari unit, or only on Ataris with chips produced at a particular point in time.

Here are some of the most interesting examples:

  • “Hairdryer mode” – it turned out that some of the internal integrated circuits behave differently than expected when they reach certain temperatures. The effects are not noticeable during normal usage, but they can be exploited by a programmer to achieve otherwise impossible results, including shifting the image half a pixel left or right or generating additional colours. During normal operation Atari would require 3 to 4 hours to reach the target temperature, but with a hairdryer it only takes 10 to 15 minutes – hence the name
  • 60 fps video playback – thanks to two different and unique features on board, Atari can be used to perform 60 fps video playback at 192×160 resolution. The first necessary feature is remappable video memory – the ANTIC chip can be instructed to read video data from any location. The second is the cartridge slot, which allows the cartridge memory to be mapped directly into the Atari address space. These two features, together with some software wizardry, allow the programmer to replace video frames in memory at intervals calculated so cleverly that they provide a seamless video experience
  • Atari was famous for its long loading times and fragile data transfers, especially where loading from cassette was concerned. It was said that when you load from cassette you should leave the room, not sneeze and not make loud noises. Only recently, after the Atari operating system was reverse-engineered, did it turn out that there is a nasty bug that sometimes prevents correct data transfer even under perfect conditions and with perfect hardware. During the transfer, Atari constantly measures the speed of the moving magnetic tape and compensates for fluctuations of the cassette recorder’s motor. Unfortunately, there are some corner cases not covered by the algorithm that can cause Atari to calculate a totally incorrect speed and crash the loading process, whether you’re sneezing or not.