5 Things You Need To Know About Wearable Medical Devices

The science fiction of yesterday is the reality of today. Imagine your driverless car pulling up to your house. You get out of the car, walk to your front door, and unlock your smart lock with just your thumb. The smart lights come on automatically as you walk through the house, and the temperature is regulated by your smart thermostat. Your glucose monitoring system beeps, warning you that your glucose level is low. You say ‘Alexa, play some Bach’ and music fills the room as you start to prepare a snack…

The Internet of Things (IoT) and artificial intelligence (AI) are rapidly transforming many aspects of our daily lives. With everything from virtual assistants like Siri and Alexa, to blood pressure monitors and other wearable devices, to smart locks, smart lights, smart thermostats and other smart appliances, to connected and autonomous vehicles, few of us in the developed world can claim to be unaffected.

Wearable technology is part of this exciting and quickly expanding sector. According to Grand View Research, the global wearable technology market was valued at US$ 32.63 billion in 2019 and is projected to expand at a compound annual growth rate of 15.9% from 2020 to 2027. Wearable fitness devices, such as activity trackers and smartwatches, are increasingly popular among consumers. An example of this trend is Google’s acquisition of Fitbit, Inc. (a US consumer electronics and fitness company) for a hefty US$ 2.1 billion. But it’s not just the big players who can reap the benefits: there are many exciting opportunities for both start-up and mature companies.

More and more, wearable technology is moving into the healthcare sector, and the line between wearable devices and wearable medical devices is not always clear. There is potential for companies to join this growing market, but doing so may seem complicated: there are regulations and standards that must be respected, which means that companies developing wearable technology need to be clear about what wearable medical technology is, and whether they are crossing the line into that domain.

So, if you are thinking of entering this market, here are a few points to consider:

Know what a wearable device is

Wearable technology, wearable devices or wearables are smart electronic devices worn on or close to the skin that detect and monitor body signals such as vital signs, as well as ambient data, giving feedback to the wearer. They are, in fact, tiny wearable computers. A popular example is the activity or fitness tracker. These devices or apps measure activity-related data such as distance covered, heart rate and calories burned, helping the wearer meet their daily fitness goals.

Know what a wearable medical device is

A medical device must have a medical purpose. According to the EU’s Council Directive 93/42/EEC of 14 June 1993 concerning medical devices, a medical device is “any instrument, apparatus, appliance, software, material or other article, whether used alone or in combination, including the software intended by its manufacturer to be used specifically for diagnostic and/or therapeutic purposes and necessary for its proper application, intended by the manufacturer to be used for human beings for the purpose of:

  • diagnosis, prevention, monitoring, treatment or alleviation of disease,
  • diagnosis, monitoring, treatment, alleviation of or compensation for an injury or handicap,
  • investigation, replacement or modification of the anatomy or of a physiological process,
  • control of conception,

and which does not achieve its principal intended action in or on the human body by pharmacological, immunological or metabolic means, but which may be assisted in its function by such means”.

Some examples of wearable medical devices are:

  • Wearable devices for diabetics. These include continuous glucose monitoring systems (CGMs) that provide real-time glucose readings, and automated insulin delivery systems such as insulin pumps that automatically adjust background insulin (for example, Medtronic’s).
  • Wearable cardioverter-defibrillators (WCDs) for heart patients who are at risk of sudden cardiac arrest. A WCD can detect a dangerous arrhythmia and deliver high-energy shocks to save the patient’s life.

Understand that software can be considered a medical device

As can be seen in the above definition, software can be considered a medical device. Directive 2007/47/EC states that “it is necessary to clarify that software in its own right, when specifically intended by the manufacturer to be used for one or more of the medical purposes set out in the definition of a medical device, is a medical device. Stand alone software for general purposes when used in a healthcare setting is not a medical device”.

What this comes down to is:

  1. Software embedded in a medical device must meet the same criteria and certification requirements as the medical device itself.
  2. Stand-alone software can also be considered a “medical device” (as defined above in Council Directive 93/42/EEC of 14 June 1993). So if you create an app that is used for diagnostic and/or therapeutic purposes, it will be considered a “medical device” and be subject to the same regulations and standards as medical hardware devices.
  3. General-purpose software, even when used in a medical environment, is not considered a medical device. Examples include an app providing reference information for a patient or physician; an app connected to a wearable fitness device that monitors someone’s general well-being rather than a specific disease; and software for non-medical purposes such as e-mail, word processing or web messaging.

Certification process

The production of medical devices requires compliance with regulations and standards that ensure the safety and efficacy of the devices.

In Europe, the most relevant regulation for this category of devices is Regulation (EU) 2017/745 on medical devices (MDR). In the United States, the U.S. Food and Drug Administration (FDA) has the Quality System (QS) Regulation.

ISO 13485 is an internationally recognized Quality Management System (QMS) standard for producing medical devices. Compliance with ISO 13485 is often seen as the first step in achieving compliance with European regulatory requirements.

In 2018, the FDA announced its plans to harmonize the FDA QS Regulation with ISO 13485.

If you’re involved in the development, design, distribution or servicing of medical devices, being ISO 13485 certified demonstrates that your company’s QMS is designed to deliver consistent, high-quality products.

As is the case with other ISO standards, organizations wishing to obtain formal ISO certification need to go through a learning process, put adequate management systems in place and seek certification through an accredited body. This can be a lengthy and costly process.

Partners can help

All this may seem complex and like a lot to take in, especially when it comes to regulation and certification. But did you know that if you want to design and make medical devices, you can also work with a certified partner?

Thaumatec is ISO 13485 certified to build software for medical devices.

So, if you’re planning on designing a medical wearable – or transforming an existing wearable into a medical wearable – but do not have the necessary certification, get in touch with us and we’ll use our knowledge and certification to help you out.

In our next blog post we’ll take a closer look at regulations and standards, in particular, ISO 13485 and the EU MDR.

10 steps to successfully start international cooperation

The pandemic struck in 2020. Corporate strategies morphed from focusing on development to focusing on survival, and items like remote work and online e-commerce became strategic IT priorities so that, despite the pandemic, it could be business as usual. The shift to remote working made many organizations realize that the proximity of their business partners and suppliers is not as crucial as it was thought to be. Even so, in most cases local partners and suppliers were favoured over international companies: a local business partner has the advantage of being in the same jurisdiction, observing the same laws and regulations, using the same currency, and speaking the same language.

However, in most situations, looking for the best partners internationally can bring huge benefits to a business in optimizing quality, availability, and cost. This is especially true when it comes to software development.

Initially, there were technical concerns, but improvements in the quality and availability of international links and supporting technologies such as Zoom have reduced them significantly.

There is a concern that working remotely can bring communication and coordination issues, particularly in multi-cultural environments. We have been working in the international arena for over fifteen years, gaining significant experience, and have developed a 10-step approach that we apply to each project to ensure maximum cooperation.

The 10-step approach is as follows:

  1. Start with a test project with measurable goals and objectives. 

A successful international cooperation initiative starts with self-contained, relatively brief projects with clear requirements. Application module projects whose costs and time-to-complete schedules are known provide an excellent means of testing the deliverables and measuring results.

  2. Ensure internal buy-in and involvement.

Without the buy-in and active participation of internal stakeholders, it is impossible to create the type of collaborative environment that characterizes successful international cooperation initiatives. The continued involvement of internal stakeholders who know the potential risks and rewards of international cooperation is important.

  3. Review and document internal processes.

A common problem in working with international partners is that internal organizations often operate under informal processes described in the local language, making it difficult to collaborate with outside suppliers. Before selecting a third-party supplier, a company should assess its internal processes and identify and document where functions intersect. The company should also document how information flows throughout the internal software development life cycle.  This will provide an opportunity to refine and adjust internal processes where necessary. 

  4. Assign a dedicated project manager.

Assign a dedicated client project manager, a single point of contact, to serve as the focal point throughout the entire project life cycle, right from the planning and request for proposal (RFP) phase to acceptance testing and implementation. This individual should be an experienced manager who works closely with the international counterpart to solve day-to-day operational issues.

  5. Ensure organizational fit.

It is also important to virtually meet with the individuals who will be working remotely to get a sense of how they will fit into the internal culture.  Reviewing the qualifications of prospective team members should be part of the process. 

  6. Ensure thorough documentation.

Even with close and informal collaboration, it is important to have well-documented roles and responsibilities stating precise project requirements and clear project milestones.  A clear statement of deliverables is needed for project closure. 

This practice also applies to project tracking and oversight, configuration management activities such as version control, backup, and recovery, and all other facets of the process.

  7. Establish a secure infrastructure.

Cooperating internationally requires a secure communication infrastructure and the use of collaboration tools such as email, chat, and intranet-based project websites. Choosing how work will be distributed, and the specific development and network infrastructure, should be done in line with the client’s security policies and development processes.

  8. Allow ample time and resources for knowledge transfer.

Knowledge transfer is a vital part of the process.  It not only ensures that the supplier’s staff members understand the client’s software but also contributes to the creation of a collaborative work environment that continues even after the completion of the project.  Additionally, long-term contracts that specify a periodic rotation of staff create a flexible yet knowledgeable base from which resources can be quickly drawn as needed.

  9. Acknowledge that cultural understanding is a two-way street.

It is an absolute necessity in multi-cultural environments that employees be able to adapt and work effectively in other cultures. However, multi-cultural enterprises need to be aware of cultural issues, such as whether disagreements can be raised openly and how they are resolved. Some companies conduct internal cross-cultural training to create awareness around such issues.

  10. Organize meetings and monitor performance and stakeholder satisfaction.

Regular project status meetings in which the client and supplier team members review schedules and deliverables and resolve open issues are an essential part of international cooperation. They enable clients to stay on top of and maintain control over projects as well as track supplier performance.

How will AI implementation influence Thaumatec? – Interview with Michał Zgrzywa, Director of AI @ Thaumatec

Michał Zgrzywa, Director of the Artificial Intelligence department, has been working at Thaumatec for some time now, so we couldn’t miss the opportunity to ask him a few questions about AI and its influence on the world and our company. Enjoy the read!

Why can AI be important for the current and future customers of Thaumatec?

Michał Zgrzywa, Director of Artificial Intelligence @ Thaumatec:

– There are countless ways in which data and AI can bring value to our customers, which is why all the largest analyst companies, like Gartner or Forrester, include them on their lists of the most impactful technology trends for the upcoming years.

They all seem to agree that in the coming years the AI revolution will profoundly change how we do business, communicate, develop ourselves, care for ourselves and the people around us, and much more. The reason the expected impact is so large, as with all the great technological revolutions of the past, such as the industrial revolution, is that from now on machines will be able to perform tasks that were so far reserved for humans. The difference is that the previous revolutions concerned physical tasks, while the AI revolution also covers a subset of cognitive tasks: analytics, recognition, forecasting, spotting trends, spotting anomalies, and so on. And these tasks will be performed faster, with more precision and at a fraction of the cost.

What most AI use cases need is data. And this is where the current and future customers of Thaumatec are in a strong position: the software embedded in our clients’ products and the IoT solutions with their hubs can provide a lot of data, data that can be turned into value for our clients and their end users.

Areas where AI is already bringing value to product organizations around the world include:

  • improving the product’s ability to analyze and interpret its environment through measured signals,
  • extending the product’s functionality with capabilities like image, sound, or natural-language recognition,
  • enhancing the product with the ability to make recommendations based on historical patterns, forecasts and the surrounding environment,
  • automating the product by allowing it to make autonomous decisions based on data,
  • improving the product’s ability to recognize unusual behaviors or patterns,
  • understanding product usage patterns and issues, which leads to a better understanding of end-users’ needs, an improved product and more satisfied customers,
  • understanding differences between end-users, which leads to better segmentation and personalization of product and service offerings, increasing revenue and satisfaction,
  • analyzing product operation patterns, which enables predictive maintenance and lowers the cost of operation,
  • analyzing product utilization patterns to recognize abnormal and potentially fraudulent behaviors, which increases the product’s reliability and security,
  • finally, the product’s data may become a new product itself, generating completely new revenue streams.

Finally, Thaumatec customers will benefit from having the whole skill set – embedded development, IoT cloud development and AI/data science – in one integrated team. We will help our clients move their products to a new level of development and gain from the AI revolution instead of being threatened by it.

What new skills will we have as a company?

We will strengthen our company skill set in multiple areas.

First, we will introduce the role of Data Scientist. Such a person needs to combine statistical knowledge, an understanding of the tools and techniques used in Machine Learning, and software development skills (Python, R) with a business analyst’s mindset. The most common technologies and techniques that a Data Scientist knows are:

  • Computer vision: object detection, semantic segmentation, image generation; techniques: various CNN architectures, GANs, transfer learning, autoencoders, TensorFlow, TensorFlow Lite;
  • Natural Language Processing: speech recognition, NL understanding tasks like text summarization, topic modelling or sentiment analysis; techniques: TF-IDF, Word2Vec, BERT, GPT-3 and many more;
  • Predictive modeling: time series forecasting, classification, regression; techniques: ARIMA, regressions, random forests, XGBoost, deep learning, and many more;
  • Optimization: genetic algorithms, Bayesian optimization;
  • Recommendation engines: collaborative, content-based, hybrids;
  • Anomaly detection: clustering, dimensionality reduction, isolation forests;
  • Simulations: Monte Carlo, reinforcement learning;
  • Software development: Python (Pandas, NumPy, Scikit-learn), R;
  • Data visualization: Matplotlib, Bokeh, Tableau, d3.

The second crucial role we will have is the Data Engineer. Their skills are mostly around retrieving, transforming, cleaning, and storing data, often big data. The technologies that are quite common for a Data Engineer are:

  • All kinds of databases (SQL, NoSQL), data warehouses (cloud, on-premises), data lakes and data transformation tools;
  • Cloud IoT tool stacks: Azure IoT Hub, AWS IoT Core;
  • Big data tool stacks: Hadoop, Kafka, HDInsight, Spark, Dask;
  • Software development in general (Python).

Finally, the third role around AI projects is the Machine Learning Engineer, whose major responsibility is model operationalization. This person builds the pipelines for model training and deployment, and also prepares the test and production environments (often dockerized and located in the cloud). The most common technologies are:

  • ML model training and operationalization: Azure Machine Learning Studio, Amazon SageMaker;
  • Devops tooling: CI/CD tools, Docker, Kubernetes;
  • Software development in general (Python).

Many of the skills can already be found in our company. But there will also be space for personal development and recruitment.

What kind of projects can we support?

I can envision at least three kinds of projects.

The most exciting projects, which we will focus on, are developments of intelligent products for our customers. Here we will add the training and integration of intelligent AI models to our established competence in building IoT solutions. The result will be AIoT (Artificial Intelligence of Things) solutions, which have huge potential to deliver an innovative competitive advantage.

The second category of projects aims at existing products that would gain significantly from an added intelligent component. The common scenario here is as follows:

  1. we would extend the hardware and firmware to start gathering new data from the product,
  2. we would build the infrastructure that allows storing the data in the cloud,
  3. using the new and previously gathered data, we would train the AI models,
  4. we would incorporate the models into the web applications, gateways, or the device itself.

Such extremely complex projects require exactly a supplier like Thaumatec: a company that has embedded, IoT cloud and AI skills in one well-integrated team.

The third category of projects would be more focused on only one part – the AI. In such cases we would cooperate with companies developing their product but lacking the Data & AI competences. We would join by taking care of the AI component, thus helping the customer to achieve their goal.

RustFest

The RustFest conference in Barcelona has come to an end, so here is a summary of it.

Saturday

Saturday started with a keynote about the benefits and pitfalls of Rust adoption. The economic gain from the safety Rust offers is now beyond question, but some new potential problems are just around the corner. One of the most interesting challenges discussed was what will happen, and how to react, when the language becomes mainstream and is no longer used just by a group of loyal enthusiasts, but also by unaware developers who happen to join a project based on this sometimes very demanding and difficult language.

A big surprise was to hear about the process of Rust adoption in China, which sadly isn’t going well. All the beloved Rust tools like cargo, rustup, and crates.io are literally unusable there because of the Great Firewall. What’s worse, the Chinese online Rust community is heavily fragmented because of the monopoly and constraints of certain messaging platforms.

The first day concluded with a workshop session. I was lucky to book a place on the introduction to embedded development in Rust, which received a great deal of interest, causing a long wait-list to emerge. In my case, bare-metal programming with Rust on an LPC845 board turned out to be a partial success: the LED was blinking, but the state of the button remained a secret.

Sunday: Strange Surprise

Sunday started with a great talk about a crowd-sourced, Rust-based train schedule information service from South Africa. What stuck in my mind was that there was no need to back up the state of the service in a database in case of a crash, because the service simply never crashes; it is only restarted every couple of months for updates. A great advertisement for Rust! Not to mention that memory usage after switching from Kotlin to Rust was reduced by a factor of 40.

Strange as it may sound, day 2 of RustFest also included a live music concert! The music was generated using commands of a simple DSL typed by the presenter into a browser running a Rust + WebAssembly web page of his own creation.

The Best Comes to Those Who Wait

The conference took place just after the long-awaited async/.await feature had been stabilized, so of course this hot topic was also present in the program. First, the audience learned about the basics of async-std, which is, as the name suggests, an asynchronous implementation of the Rust standard library, in which no potentially long-running operation blocks unless specifically asked to. Just after that, we received a deep insight into how the Rust compiler optimizes async functions and makes them zero-cost.
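
To illustrate the style, here is a minimal sketch (assuming the async-std crate; the function is made up for the example). The sleep yields to the executor instead of blocking the thread:

use async_std::task;
use std::time::Duration;

async fn delayed_hello() -> &'static str {
    // Unlike std::thread::sleep, this yields instead of blocking the OS thread.
    task::sleep(Duration::from_millis(100)).await;
    "hello"
}

fn main() {
    // Nothing runs until the future is driven by an executor.
    let greeting = task::block_on(delayed_hello());
    println!("{}", greeting);
}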

The day ended with another workshop session and this time I was implementing a 2D game starring (obviously!) a crab – the Rust mascot.

A Classic Snake Game in Rust

The goal of this project was to experiment with Rust bare-metal programming by implementing a simple game. The chosen hardware platform was the STM32F4DISCOVERY. It is similar to the F3 DISCOVERY from the official embedded Rust tutorial; however, it has some additional features, like the analog-to-digital converter required for the snake to be controlled with a joystick. The game is displayed on a 96×64 OLED screen using the SSD1331 controller and the SPI interface. In the process of development and debugging, two additional versions of the game were created: one with a text-based UI for running the game in a terminal, and another using a game engine called Quicksilver. Quicksilver is capable of targeting WebAssembly, which means the same game can be played both on the microcontroller and in a web browser. The implementation builds on embedded HAL crates, which hide a lot of details and allow us to interact with the hardware at a relatively high level of abstraction.
Some notable issues were:

* Development without an OS and allocator requires the #![no_std] attribute, but Rust’s native test framework depends on standard library facilities. To keep unit tests in the same file as the source code (which is Rust’s convention), conditional compilation was required, as sketched below.
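
A minimal sketch of the trick (the wrap function is made up for illustration): std is disabled only for non-test builds, so the host test harness still works:

#![cfg_attr(not(test), no_std)]

pub fn wrap(x: u8, max: u8) -> u8 {
    (x + 1) % max
}

#[cfg(test)]
mod tests {
    #[test]
    fn wraps_around() {
        // Test builds keep std, so the normal harness is available.
        assert_eq!(super::wrap(63, 64), 0);
    }
}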

* The lack of dynamic memory allocation makes it difficult to parameterize application code with the size of “things” – everything has to be set at compile time. This is a problem when unit testing or targeting different screen sizes. The easy solution would be a feature named const generics which, unfortunately, is not yet fully implemented in Rust. Luckily there is a workaround in the form of the generic_array crate (see the sketch below), but overall Rust still has to catch up with C++ in this area.
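
A sketch of the workaround (names are illustrative; assumes the generic_array crate with its 0.x-era API), where the buffer length is a type-level constant:

use generic_array::{ArrayLength, GenericArray};
use generic_array::typenum::U64;

// The length is a type parameter: no allocator is needed, yet the code
// stays generic over the size instead of hard-coding it.
struct Buffer<N: ArrayLength<u8>> {
    cells: GenericArray<u8, N>,
}

impl<N: ArrayLength<u8>> Buffer<N> {
    fn new() -> Self {
        Buffer { cells: GenericArray::default() }
    }
}

fn main() {
    // Pick a concrete size per target, e.g. 64 cells for the OLED build.
    let buf: Buffer<U64> = Buffer::new();
    assert_eq!(buf.cells.len(), 64);
}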

* An important thing to remember when developing with Rust is that the difference in performance between debug and release versions is huge. In the case of this game and microcontroller, the debug version was unusable.
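
The usual remedy is to flash the binary built with cargo build --release; if debug builds must stay usable, a stock Cargo option (a sketch, not from the original post) is to raise their optimization level in Cargo.toml:

[profile.dev]
# Optimize debug builds too; debug assertions stay enabled.
opt-level = 3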

For more information follow the link below:

Ignite 2019 Reveals new Azure Synapse

Microsoft has done it again: just yesterday it revealed a brand-new “limitless analytics” service called Azure Synapse. The grand revelation took place at Ignite 2019. However, the real question is: what does this brand-new Azure Synapse do? As Microsoft would say, “Azure Synapse brings together enterprise data warehousing and Big Data Analytics”. In practice, organizations can use their data quickly by pulling together insights from all data sources, data warehouses, and big data analytics systems.

Pros of Azure Synapse

1. Azure Synapse Analytics makes building and operating analytics solutions a simple, intuitive and no-code experience.

2. It empowers users to analyze data quickly while bringing high performance and unmatched scale.

3. Project timelines can be measured in hours not months.

4. Enabling Power BI and Azure machine learning is simpler.

5. It provides a unified workspace for data preparation, data management, data warehousing, big data, and artificial intelligence tasks.

Feel no Fear

You might be worried about how your business will adapt to this new technology. Fear not: your business can continue running its current data workloads in production today with Azure Synapse, and it will automatically benefit from the new capabilities. Azure Synapse is all about bringing information in faster, more productively and more securely, pulling together insights from all data sources, data warehouses, and big data analytics systems. Integration is a must!

What Lies Ahead

Azure Synapse is breaking boundaries: Microsoft wants to make data accessible, usable and actionable for organizations, and to enable professionals to deliver that information as swiftly as possible at scale.

Last Words

In the world we live in today, the role of IT and developers is becoming increasingly important. Here at Thaumatec, we stay up to date with new revelations and technologies so we can deliver them to you. After all, we make the future matter and connect it to you.

Top 100 Smartest Cities in the World: Wroclaw Ranks #95

According to the IESE (Business School, University of Navarra) Cities in Motion Index 2019, Wroclaw is among the top 100 smartest cities in the world!

We are very proud of our city’s recognition; we are even prouder that our services and technology have made a small contribution to Wroclaw’s success. Needless to say, ranking in the top 100 smart cities wasn’t easy: many indicators were considered in this rigorous selection process, such as human capital, social cohesion, economy, governance, environment, mobility and transportation, urban planning, technology, and international reach. Warsaw (#69) and Wroclaw (#95) were the only two Polish smart cities on the list; London, New York, and Amsterdam ranked as the top three in the world, according to the report.

A smart city means quality of life, and as Thaumaticians living and working in Wroclaw, we are honored to be part of such recognition. Here at Thaumatec, we make our small contribution to society by developing LoRa/LoRaWAN gateways that have helped students from the local university develop great ideas and have supported research projects for other companies. Our engineers also have the right skills to develop technologies that can make Wroclaw an even smarter city. After all, we have experience working in Amsterdam on the development of its smart-city project. We proudly engage in activities that will keep helping our lovely Wroclaw rank in the top 100 in the years to come.

In Wrocław, most smart city projects concern transportation, while also taking care of the environment. Many creative projects have been established throughout the years, such as public bike and electric car rentals, advanced cashless payments inside trams and buses, mobile apps for paying for public transportation, and more. We hope the city of Wroclaw can keep implementing smart projects that improve the quality of life of all its residents. Remember, you can always count on Thaumatec for delivery, creative thinking and transcending technology, and for what we do best: creating a future that matters and connects to all of you.

For more information in Polish you can visit this website:

https://www.wroclaw.pl/smartcity/iese-cities-in-motion-index-2019-wroclaw

Also if you would like the full IESE Cities in Motion Index 2019, visit this website:

https://media.iese.edu/research/pdfs/ST-0509-E.pdf

How will IoT change in the upcoming years?

Last week Microsoft released its “IoT Signals” report, showing that IoT is on the rise in the major business sectors! According to the report, by 2021 the vast majority (94%) of companies will be using the Internet of Things (IoT). The report demonstrates that industries such as manufacturing, retail/wholesale, transportation, government, and healthcare are not slowing down in their use of IoT; in fact, it is growing even more. IoT is changing the way people live and work.

Why Should We Use IoT?

“Business decision-makers, IT decision-makers, and developers at enterprise organizations are incorporating IoT at high rates, and the majority is satisfied with their experience,” says the IoT Signals report. The enthusiasm for IoT adoption is going global, and there is no way of stopping it.

The USA, UK, Germany, France, China, and Japan are the leading countries in the areas of manufacturing, retail/wholesale, transportation, government, and healthcare. According to the report, the majority of businesses are satisfied with the results; in fact, they say IoT solutions are crucial to decision making because they deliver a strong return on investment, and many believe that within two years they will see that return in cost savings and efficiencies. In short, the reasons any business should adopt IoT are efficiency, productivity, optimization, security, supply chain management, quality assurance, asset tracking, and sales enablement.

Needless to say, each company adopts different use cases according to its needs, but once they are aligned, the results are tangible.

Challenges

Like every good thing in this world, IoT comes with difficulties, to name a few: complexity and technical challenges, security concerns, and a lack of talent and training. The most common one is the level of complexity and the technical challenges, the #1 barrier for companies wanting to use IoT (38% of companies say these are the reasons they aren’t using IoT more). Lack of budget and staff resources (29%), lack of knowledge (29%), and difficulty finding the right solution (28%) are the next most common roadblocks. Lack of talent and training is the newest challenge, because it is hard to find workers with these specific skills and experience. Security is always a factor (19%): when implementing IoT, concerns rise in the areas of firmware management and hardware/software testing and updating. As the IoT Signals report puts it, “our findings show that IoT adopters believe around one-third of IoT projects fail in proof of concept (POC), often because the implementation is expensive or the bottom-line benefits are unclear.”

“Due to IoT’s complexity, an IoT strategy requires leaders to bridge organizational boundaries, communicate the strategic vision for IoT, and achieve broad alignment across all participating teams. Having a technology leader with end-to-end accountability can be critical to achieving success with IoT,” states the IoT Signals report.

What Is Expected in the Future

Despite its complexity, IoT is globally known for making companies efficient, productive and safe. It is estimated that in the near future, implementing IoT will increase a business’s ROI and become indispensable to any organization. The future of IoT seems to project prosperity at its best.

Yocto Fundamentals

Complex embedded and cyber-physical systems are getting more and more sophisticated and perform increasingly complex tasks. In order to shorten their time to market, embedded Linux distributions are used as the base environment for newly developed products. While there is great convenience in having the system set up quickly, it comes at the cost of increased overall complexity of the solution, which means additional work is required for platform maintenance. Another level of complexity is added by the need to develop and maintain multiple product configurations covering systems of different scales and different software and hardware architectures.

The Yocto Project addresses those issues by using a layered architecture to provide support for various hardware architectures, boards, and software components. This approach not only allows existing software components to be reused, but also makes them interchangeable, providing building blocks for embedded systems developers and integrators.

Imagine a case where an embedded system manufacturer wants to build a custom product. In the traditional approach, engineers would have to prepare the bootloader, the Linux kernel and the root file system customized to match the requirements of each platform they intend to support. Moreover, application logic also needs to be integrated.

In an ideal world, the manufacturer would like to prepare one generic component that provides a solution to all specific problems. Such a component should then be reusable and reconfigurable across all base platforms of the manufacturer’s choice.

Here is where the Yocto Project brings value. With Yocto it is possible to provide an application to the integration team in the form of a recipe contained in a product-specific layer. Such a layer can be added to any Yocto-based build system, usually without any additional work required to build embedded Linux system images for the target hardware platforms. This not only reduces the work required to bring up the system for the first time, but also makes it possible to switch the base platform pain-free while staying focused on the solution.

Having recipes prepared with care and the system architected as a set of independent, feature-related layers keeps the system clean and lets developers reuse software components. It is also possible to modify recipes provided by other layers without polluting them, using bitbake append files: a smart mechanism for modifying recipes provided by a third party while keeping those modifications in a separate repository. That makes the management and maintenance of code, build configurations and build steps easy and efficient, as the sketch below shows.
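
For illustration, a hypothetical append file named my-app_%.bbappend (the recipe and file names are assumptions), placed in our own layer, could extend a third-party recipe without touching it:

# my-app_%.bbappend: applied on top of any version of the my-app recipe.
# Look for additional files in this layer first.
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
# Fetch an extra configuration file shipped in this layer.
SRC_URI += "file://custom.cfg"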


OpenEmbedded, a well-known build framework for embedded Linux distributions that provides thousands of packages, and the Yocto base layers together form the foundation of the system being developed, to be customized by other layers added on top.

As its name suggests, the Board Support Package (BSP) layer provides support for a particular board or set of boards. It is the place for configurations and customizations of the Linux kernel and bootloader, as well as any other software components required by the platform’s features. It should also contain any machine-specific tuning options necessary for the compiler.

The Distribution layer allows producing a customized Linux distribution: a set of software packages providing applications and libraries, plus the Linux kernel. It can be used to adjust all the low-level details more precisely, like compile-time options, base files, package selection or branding. In many cases the reference Poky distribution is enough, so this layer is an optional customization.

The Application layer is a layer, or a set of layers, intended to provide application-specific features: most notably the application itself, but also libraries, frameworks and other extra features the developer needs to build the final solution. All those components come in generic form for portability.

An alternative to the Yocto Project is Buildroot. Both projects are able to generate fully functional embedded Linux distributions, so what is the difference between them? Buildroot has a strong focus on simplicity and reuses existing, well-known technologies such as kconfig and make. Yocto, on the other hand, requires knowledge of bitbake and performs builds in a sophisticated framework based on Python and bash.

There is also a significant difference in output. The main artifact of Buildroot is a root filesystem, although it also produces a toolchain, a kernel and a bootloader. The primary output of Yocto, on the other hand, is a package feed that then forms the foundation of the final root filesystem image. That means Buildroot is a better choice for single-configuration, monolithic systems with a fixed package set, while with a Yocto-based image one can work at the package level during development or even after deployment. Similarly, the extensible SDK (eSDK) provided by Yocto can be customized or updated; with such a development environment, developers can work on the application for the target system in a convenient way, without needing to rebuild the system image.

Not only is development easier; build times also differ. Buildroot has no caching mechanism for reliable incremental builds, in contrast to Yocto, which caches build results so that only modified packages are rebuilt. The difference is significant, especially for complex systems, in Yocto’s favour. Moreover, cache files produced by one build can be shared via the network and reused by multiple developers.

The configuration approach also differs between the two solutions. Buildroot stores configuration in a single file which defines the system: bootloader, kernel, packages and CPU architecture. In Yocto, configuration is divided into recipes and append files provided by separate layers, and the top-level configuration is controlled from a single configuration file that uses those recipes as building blocks.

Keeping specific configurations separate, e.g. for the distribution, machine architecture, BSP and root filesystem image, allows them to be reused, for example to build the same image for any number of platforms. In Buildroot, each platform requires a separate monolithic configuration, which means a large part of it will be duplicated.

While developers may find the Yocto Project to have a somewhat steeper learning curve, its well-defined project structure saves time, optimizes work and, most importantly, scales very well: vertically, towards increasingly complex embedded solutions, and horizontally, towards build approaches and platforms that support complex product families in a uniform manner. From the developer’s point of view, because Yocto allows code and configuration to be organized in separate layers stored in separate repositories, both development and maintenance of projects become very structured and easy to organize. All these features make a Yocto-based work environment effective, convenient and well suited to all kinds of long-term development. This reduces time to market and required effort, especially for products with multiple configurations and base platforms.

Less talked about, but still great Rust features!

Rust is one of the most talked-about languages of recent years, as the Firefox browser is gradually being rewritten in it and more and more companies are starting to use it. While its most prominent feature, the borrow checker, has been discussed numerous times, there are also many smaller features which make developing in Rust a comfortable experience.

Convenient switching between toolchain versions
One of the most common issues that affect Rust programmers is that the most tempting language features are only available in nightly. On the other hand, for professional development it’s a good idea to keep the ecosystem stable, with one toolchain version that is expected to make our codebase work and that will be supported for as long as possible. Is it possible to seamlessly switch between different releases of the Rust toolchain?

Enter rustup – the Rust toolchain installer. It’s the tool that allows not only installing, but also updating and switching between different toolchains, as the commands below illustrate.

The main rustup website is rather minimalistic, but anyone in need of more information can find it in the rustup project’s quite extensive README.
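
For example (standard rustup commands):

rustup toolchain install nightly   # install an additional toolchain
rustup default stable              # set the global default toolchain
rustup override set nightly        # use nightly only in the current directory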

Module system
A project’s source code can rarely be contained in a single file. Different programming languages have different approaches to solving this problem. One of the best known in the embedded programming world is the C/C++ way: #include, which essentially causes the referenced file to be pasted in its entirety by the preprocessor.

This solution is simple to implement, easy to understand and usually works. There are pretty severe issues with this approach though: any changes made in one file can spill into all others which include it, directly or indirectly. It can be a rogue #define which changes the behaviour of the code, or a using namespace directive.

There’s also the risk of including one file more than once, which would cause things to be defined twice; every C/C++ developer has the include guard in their muscle memory or editor templates. Rust solves the problem differently: with a module system.

Modules in Rust can work inside one file in a way similar to namespaces in C++, with the addition that module contents are private by default: anything that needs to be visible outside the module needs an extra pub keyword.

And when a module itself is prefixed with pub, it can be used from other source files while remaining completely independent of them, as the sketch below shows. More information can be found in the Module System to Control Scope and Privacy chapter of the Rust Book.
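
A minimal sketch (module and function names are illustrative):

mod sensor {
    // Private by default: callable only inside `sensor`.
    fn raw_read() -> u16 {
        421
    }

    // Exposed to the outside with `pub`.
    pub fn read_celsius() -> f32 {
        raw_read() as f32 / 10.0
    }
}

fn main() {
    // `sensor::raw_read()` would not compile here, but the public API does.
    println!("{}", sensor::read_celsius());
}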

Crates
Almost all software developed today stands on the shoulders of giants by using at least one third-party library. Most modern programming languages provide an easy way for the code to depend on a given version of a third-party library, which can be automatically downloaded and built by the build system. Rust isn’t different in this matter: cargo, the build system used by Rust, gets the current project information from the Cargo.toml file, where dependencies on other libraries (called crates) and their versions are also stored. Then, during the build process, crates are downloaded and built.

Crates can be browsed in the Rust Package Registry, which also conveniently provides a snippet that can be pasted into Cargo.toml to use the chosen crate, for example:
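
A minimal sketch of such a snippet (the crate and version are just an example):

[dependencies]
rand = "0.7"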

As an interesting side note, cargo and crates are both references to cargo cults. Given current development practices of stitching libraries together until things work, it’s a pretty humorous touch.

Integrated testing
Software testing is one of the basic elements of modern code hygiene. There are testing frameworks for almost all programming languages, but Rust decided to be a step ahead: a simple test framework is provided with the language itself. Citing a simple example that can be found in the Writing Tests chapter of the Rust Book:

#[cfg(test)]
mod tests {
    #[test]
    fn it_works() {
        assert_eq!(2 + 2, 4);
    }
}

In this example, the tests module will be compiled only during test execution, and the it_works function is marked as a test, so it will be run when the cargo test command is invoked.

But that’s not all: it’s possible not only to test the correctness of our program, but also to benchmark it. To do that, one creates a function marked as bench, taking a Bencher object which does the measurement; thanks to this, initialization of objects can be done outside the measured code fragment, as the sketch below shows. At the time of writing this text, benchmarking is an unstable feature and needs to be enabled using the test feature. More information can be found under the test library features in the Unstable Book.
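
A minimal sketch, close to the example in the Unstable Book (nightly only; run with cargo bench):

#![feature(test)]
extern crate test;

pub fn add_two(a: i32) -> i32 {
    a + 2
}

#[cfg(test)]
mod tests {
    use test::Bencher;

    #[bench]
    fn bench_add_two(b: &mut Bencher) {
        // Any setup would go here, outside the measured closure.
        b.iter(|| super::add_two(2));
    }
}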

Documentation tests
Rust also provides rustdoc, an automatic documentation generation system similar to Doxygen or Sphinx. One of the most common problems when documenting rapidly moving codebases is keeping the documentation up to date. Rust tackles the issue by allowing code samples with assertions to be included in the documentation, where they are compiled and evaluated as tests, as in the sketch below. More information can be found in the rustdoc documentation.
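
A minimal sketch (the function and crate name are made up for the example); cargo test will compile and run the snippet inside the doc comment:

/// Adds one to the given number.
///
/// # Examples
///
/// ```
/// let five = my_crate::add_one(4);
/// assert_eq!(5, five);
/// ```
pub fn add_one(x: i32) -> i32 {
    x + 1
}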

Summary
Rust provides not only a lot of nice programming constructs, but also a really comfortable ecosystem that aims at solving the practical problems programmers face in their daily jobs. It therefore shapes up to be not merely an academic curiosity, but a tool that can be used in long-running commercial projects. If you haven’t played with it yet, do try it out: Rust is not only a useful tool, but also lots of fun.
