RustFest

The RustFest conference in Barcelona has come to an end, so here is a summary of it.

Saturday

Saturday started with a keynote about the benefits and pitfalls of Rust adoption. The economic gain from the safety Rust offers is now beyond question, but some new potential problems are just around the corner. One of the most interesting challenges discussed was what will happen, and how to react, when the language becomes mainstream and is no longer used only by a group of loyal enthusiasts, but also by developers who simply happened to join a project based on this sometimes very demanding and difficult language.

A big surprise was to hear about the process of Rust adoption in China, which sadly isn’t going well. All the beloved Rust tools like cargo, rustup, and crates.io are literally unusable there because of the Great Firewall. What’s worse, the Chinese online Rust community is heavily fragmented because of the monopoly and constraints of certain messaging platforms.

The first day concluded with a workshop session. I was lucky to book a place in the introduction to embedded development in Rust, which received a great deal of interest and caused a long wait-list to emerge. In my case, bare-metal programming with Rust on an LPC845 board turned out to be a partial success: the LED was blinking, but the state of the button remained a secret.

Sunday: Strange Surprise

Sunday started with a great talk about a crowd-sourced, Rust-based train schedule information service from South Africa. What stuck in my mind was that there was no need to back up the state of the service in a database in case of a crash, because the service simply never crashes and is only restarted every couple of months for updates. A great advertisement for Rust! Not to mention that after switching from Kotlin to Rust, memory usage was reduced by a factor of 40.

Strange as it may sound, day 2 of RustFest also included a live music concert! The music was generated from commands of a simple DSL, typed by the presenter into a browser running a Rust+WebAssembly web page of his own creation.

The Best Comes to Those Who Wait

The conference took place just after the long-awaited async/.await feature had been stabilized, so of course this hot topic was also present in the program. First, the audience learned about the basics of async-std, which is, as the name suggests, an asynchronous implementation of the Rust standard library, in which no potentially long-running operation blocks unless specifically asked to. Just after that, we received a deep insight into how the Rust compiler optimizes async functions and makes them zero-cost.
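
As a minimal sketch of what async-std usage looks like (assuming the async-std crate as a dependency; the file name is made up for illustration):

use async_std::fs::File;
use async_std::prelude::*;
use async_std::task;

// Reading a file does not block the thread; the task is simply
// suspended at each .await until the data is ready.
async fn read_config() -> std::io::Result<String> {
    let mut file = File::open("config.toml").await?;
    let mut contents = String::new();
    file.read_to_string(&mut contents).await?;
    Ok(contents)
}

fn main() -> std::io::Result<()> {
    // block_on drives the future to completion on the current thread.
    let contents = task::block_on(read_config())?;
    println!("read {} bytes of configuration", contents.len());
    Ok(())
}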

The day ended with another workshop session and this time I was implementing a 2D game starring (obviously!) a crab – the Rust mascot.

A Classic Snake Game in Rust

The goal of this project was to experiment with Rust bare-metal programming by implementing a simple game. The chosen hardware platform was the STM32F4DISCOVERY board. It is similar to the F3 DISCOVERY board from the official embedded Rust tutorial; however, it has some additional features, such as the analog-to-digital converter required for the snake to be controlled with a joystick. The game is displayed on a 96×64 OLED screen using the SSD1331 controller and the SPI interface. In the process of development and debugging, two additional versions of the game were created: one with a text-based UI for running the game in a terminal, and the other using a game engine called Quicksilver. Quicksilver is capable of targeting WebAssembly, which means the same game can be played both on the microcontroller and in a web browser. The implementation of the game builds on embedded HAL, which hides a lot of details and allows us to interact with the hardware at a relatively high level of abstraction.
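
For illustration, a minimal sketch of that style, assuming embedded-hal 0.2-style traits (the function is made up; the concrete pin and delay types would come from a board support crate such as stm32f4xx-hal):

use embedded_hal::blocking::delay::DelayMs;
use embedded_hal::digital::v2::OutputPin;

/// Blink a status LED a few times; the code only knows the OutputPin and
/// DelayMs traits, not the STM32F4 registers hidden behind them.
fn blink<LED, DELAY>(led: &mut LED, delay: &mut DELAY, times: u8)
where
    LED: OutputPin,
    DELAY: DelayMs<u16>,
{
    for _ in 0..times {
        let _ = led.set_high();
        delay.delay_ms(100);
        let _ = led.set_low();
        delay.delay_ms(100);
    }
}
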
Some notable issues were:

* Development without an OS and allocator requires the #![no_std] attribute, but Rust’s native test framework depends on standard-library facilities. To keep unit tests in the same file as the source code (which is Rust’s convention), conditional compilation was required; see the first sketch after this list.

* Lack of dynamic memory allocation makes it difficult to parameterize application code with the size of “things” – everything has to be set at compile time. This is a problem when unit testing or targeting different screen sizes. The easy solution would be a feature named const generics which, unfortunately, is not yet fully implemented in Rust. Luckily there is a workaround in the form of the generic_array crate (see the second sketch after this list), but overall Rust still has to catch up with C++ in this area.

* An important thing to remember when developing with Rust is that the difference in performance between debug and release builds is huge. In the case of this game and this microcontroller, the debug build was unusable.
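
A rough sketch of the conditional-compilation trick for keeping tests next to no_std code (the function and test names are made up for illustration):

// Build as no_std for the target, but keep std available when running
// `cargo test` on the host, so the usual test framework still works.
#![cfg_attr(not(test), no_std)]

/// Game logic that only relies on `core`, so it compiles in both modes.
pub fn wrap_position(pos: u8, max: u8) -> u8 {
    if pos >= max { 0 } else { pos }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn wraps_at_the_edge() {
        assert_eq!(wrap_position(64, 64), 0);
        assert_eq!(wrap_position(10, 64), 10);
    }
}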
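
And a rough sketch of the generic_array workaround (generic_array 0.x assumed; the type is invented for illustration):

// Without const generics, the buffer length is carried in a type-level
// number from typenum and used through generic_array.
use generic_array::{ArrayLength, GenericArray};
use generic_array::typenum::U96;

/// A frame-buffer row whose length is chosen at compile time
/// by picking a different type-level number per screen size.
struct FrameRow<N: ArrayLength<u8>> {
    pixels: GenericArray<u8, N>,
}

impl<N: ArrayLength<u8>> FrameRow<N> {
    fn new() -> Self {
        FrameRow { pixels: GenericArray::default() }
    }
}

fn main() {
    // One 96-pixel-wide row of the 96x64 OLED.
    let row: FrameRow<U96> = FrameRow::new();
    assert_eq!(row.pixels.len(), 96);
}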

For more information follow the link below:

Ignite 2019 Reveals New Azure Synapse

Microsoft has done it again: just yesterday they revealed a brand-new “limitless analytics” service called Azure Synapse. The grand revelation took place at Ignite 2019. However, the real question is: what does this brand-new Azure Synapse do? As Microsoft would say, “Azure Synapse brings together enterprise data warehousing and Big Data Analytics”. In short, organizations can use their data quickly by pulling together insights from all data sources, data warehouses, and big data analytics systems.

Pros of Azure Synapse

1. Azure Synapse Analytics makes building and operating analytics solutions a simple, intuitive and no-code experience.

2. It empowers users to analyze data quickly while bringing high performance and unmatched scale.

3. Project timelines can be measured in hours, not months.

4. Enabling Power BI and Azure machine learning is simpler.

5. It provides a unified workspace for data preparation, data management, data warehousing, big data, and artificial intelligence tasks.

Feel no Fear

You might be worried about how your business will adapt to this new technology. Feel no fear: your business can continue running its current data workloads in production today with Azure Synapse, and it will automatically benefit from the new capabilities. Azure Synapse is all about ingesting information faster, more productively, and more securely, pulling together insights from all data sources, data warehouses, and big data analytics systems. Integration is a must!

What Lies in the Future

Azure Synapse is breaking boundaries: Microsoft wants to make data accessible, usable, and actionable for organizations, and to let professionals deliver that information as swiftly as possible at a large scale.

Last Words

In the world we live in today, the role of IT and developers is becoming increasingly important. Here at Thaumatec, we stay up to date with new revelations and technologies so we can deliver them to you. After all, we make the future matter and connect it to you.

Top 100 Smartest Cities in the World: Wroclaw Ranks #95

According to the IESE (Business School, University of Navarra) Cities in Motion Index 2019, Wroclaw is among the top 100 smartest cities in the world!

We are very proud of our city’s recognition; however, we are even prouder that our services and technology have made a small contribution to Wroclaw’s success. Needless to say, it wasn’t an easy task to rank among the top 100 smart cities: many indicators were considered in this rigorous selection process, such as human capital, social cohesion, economy, governance, environment, mobility and transportation, urban planning, technology, and international reach. Warsaw (#69) and Wroclaw (#95) were the only two Polish smart cities on the list; London, New York, and Amsterdam ranked as the top 3 in the world according to the report.

A smart city means quality of life, and as Thaumaticians living and working in Wroclaw, we are honored to be part of such recognition. Here at Thaumatec, we make our small contribution to society by developing gateways based on LoRa and LoRaWAN that have helped students from the local university develop great ideas that have been recognized, and that have supported research projects for other companies. Our engineers also have the right set of skills to develop technologies that can be used to make Wroclaw an even smarter city. After all, we have experience working in Amsterdam on the development of their smart city project. We proudly engage in activities that will keep helping our lovely Wroclaw rank in the top 100 in the following years.

In Wrocław, most smart city projects concern transportation while also taking care of the environment. Many creative projects have been established throughout the years, such as public bike and electric car rentals, advanced cashless payments inside trams and buses, mobile apps for paying for public transportation, and more. We hope that in the near future the city of Wroclaw can keep implementing smart projects that help improve the quality of life of all its residents. Remember, you can always count on Thaumatec for delivery, creative thinking, transcending technology, and what we do best: creating a future that matters and connects to all of you.

For more information in Polish you can visit this website:

https://www.wroclaw.pl/smartcity/iese-cities-in-motion-index-2019-wroclaw

Also if you would like the full IESE Cities in Motion Index 2019, visit this website:

https://media.iese.edu/research/pdfs/ST-0509-E.pdf

How will IoT change in the upcoming years?

Last week Microsoft released its “IoT Signals” report to show that IoT is on the rise in the major business sectors! According to the report, the majority of companies (94%) will be using the Internet of Things (IoT) by 2021. The report shows that industries such as manufacturing, retail/wholesale, transportation, government, and healthcare are not slowing down in their use of IoT; in fact, it is growing even more. IoT is changing the way people live and work.

Why Should We Use IoT?

“Business decision-makers, IT decision-makers, and developers at enterprise organizations are incorporating IoT at high rates, and the majority is satisfied with their experience,” states the IoT Signals report. The enthusiasm for IoT adoption is going global and there is no way of stopping it.

The USA, UK, Germany, France, China, and Japan are the leading countries in the areas of manufacturing, retail/wholesale, transportation, government, and healthcare. According to the report, the majority of businesses are satisfied with the results; in fact, they say IoT projects are crucial for decision making because they bring a strong return on investment. Many believe that two years from now they will see that return on investment in the form of cost savings and efficiencies as well. In a few words, the reasons why any business should adopt IoT are efficiency, productivity, optimization and improvement of employee productivity, security, supply chain management, quality assurance, asset tracking, and sales enablement.

Needless to say, each company adopts different use cases according to its needs, but once they are aligned, the results are tangible.

Challenges

Like every good thing in this world, IoT comes with difficulties; to name a few: complexity and technical challenges, security concerns, and lack of talent and training. The most common are complexity and technical challenges, which are the #1 barrier for companies wanting to use IoT (38% of companies say these are the reasons they aren’t using IoT more). Lack of budget and staff resources (29%), lack of knowledge (29%), and difficulty finding the right solution (28%) are the next most common roadblocks. Lack of talent and training is the newest challenge, because it is hard to find workers with these specific skills and experience. Security is always a factor (19%): when implementing IoT, concerns arise around firmware management and hardware/software testing and updating. As the IoT Signals report puts it, “our findings show that IoT adopters believe around one-third of IoT projects fail in proof of concept (POC), often because the implementation is expensive or the bottom-line benefits are unclear.”

“Due to IoT’s complexity, an IoT strategy requires leaders to bridge organizational boundaries, communicate the strategic vision for IoT, and achieve broad alignment across all participating teams. Having a technology leader with end-to-end accountability can be critical to achieving success with IoT,” states the IoT Signals report.

What Is Expected in the Future

Despite its complexity, IoT is globally known for making companies efficient, productive, and safe. It is estimated that in the near future, implementing IoT will increase a business’s ROI and make IoT indispensable to any organization. The future of IoT looks prosperous indeed.

Yocto Fundamentals

Embedded and cyber-physical systems are getting more and more sophisticated and perform increasingly complex tasks. To shorten their time to market, embedded Linux distributions are used as the base environment for newly developed products. While there is great convenience in having the system set up quickly, it comes at the cost of increased overall complexity of the solution, which means additional work is required for platform maintenance. Another level of complexity is added by the need to develop and maintain multiple product configurations covering systems of different scales, software stacks, and hardware architectures.

The Yocto Project addresses those issues by using a layered architecture to provide support for various hardware architectures, boards, and software components. This approach not only allows existing software components to be reused but also makes them interchangeable, thus providing building blocks for embedded systems developers and integrators.

Imagine a case where an embedded system manufacturer wants to build a custom product. In the traditional approach, engineers would have to prepare the bootloader, the Linux kernel and the root file system customized to match the requirements of each platform they intend to support. Moreover, application logic also needs to be integrated.

In the ideal world, the manufacturer would like to prepare one generic component that provides a solution to all specific problems. Such a component should be then reusable and reconfigurable across all base platforms of the manufacturer’s choice.

Here is where the Yocto Project brings value. With Yocto it is possible to deliver an application to the integration team in the form of a recipe contained in a product-specific layer. Such a layer can be added to any Yocto-based build system, usually without any additional work required to build embedded Linux system images for the target hardware platforms. This not only reduces the work required to bring up the system for the first time but also makes it possible to switch the base platform painlessly while staying focused on the solution.

Having recipes prepared with care, and the system architected as a set of independent and feature-related layers, keeps the system clean and lets developers reuse software components. It is also possible to modify recipes provided by other layers without polluting them, by using bitbake append files. Append files are a smart mechanism for modifying bitbake recipes provided by a third party while keeping those modifications in a separate repository. That makes the management and maintenance of code, build configurations, and build steps easy and efficient.

OpenEmbedded, a well-known build framework for embedded Linux distributions that provides thousands of packages, together with the Yocto base layers, forms the foundation of the system being developed, which is then customized by other layers added on top.

As its name suggests, the Board Support Package (BSP) layer provides support for a particular board or a set of boards. It is the place where configurations and customizations of the Linux kernel and the bootloader should be located, along with any other software components required by the platform features. It should also contain any machine-specific tuning options necessary for the compiler.

The Distribution layer allows producing a customized Linux distribution – a set of software packages that provide applications and libraries, plus the Linux kernel. It can be used to fine-tune all the low-level details, like compile-time options, base files, package selection, or branding. In many cases the reference Poky distribution is enough, so this layer is an optional customization.

The Application layer is a layer, or a set of layers, intended to provide application-specific features: most notably the application itself, but also libraries, frameworks, and other extras the developer needs to build the final solution. All those components come in a generic form for portability.

An alternative to the Yocto Project is Buildroot. Both projects are able to generate a fully functional embedded Linux distribution, so what is the difference between them? Buildroot has a strong focus on simplicity and reuses existing, well-known technologies such as Kconfig and make. Yocto, on the other hand, requires knowledge of bitbake and performs builds in a sophisticated framework based on Python and bash.

There is also a significant difference in output. The main artefact of Buildroot is a root filesystem, though it also produces a toolchain, a kernel, and a bootloader. The primary output of Yocto, on the other hand, is a package feed that then forms the foundation of the final root filesystem image. That means Buildroot is a better choice for single-configuration, monolithic systems with a fixed package set, while with a Yocto-based image one can work at the package level during development or even after deployment. Similarly, the extensible SDK (eSDK) provided by Yocto can be customized or updated. With such a development environment, developers are able to work on the application for the target system in a convenient way, without the need to rebuild the system image.

Not only is development easier, but build times also differ. Buildroot has no caching mechanism for reliable incremental builds, in contrast to Yocto, which caches build results so that only modified packages are rebuilt. The difference is significant, especially for complex systems, in favour of the Yocto build system. Moreover, cache files produced by one build can be shared over the network and reused by multiple developers.

The configuration approach also differs between the two solutions. Buildroot stores configuration in a single file that defines the whole system: bootloader, kernel, packages, and CPU architecture. In Yocto, configuration is divided into recipes and append files provided by separate layers, and the top-level configuration is controlled from a single configuration file that uses the aforementioned recipes as building blocks.

Keeping specific configurations separate – e.g. for the distribution, machine architecture, BSP, and root filesystem image – allows them to be reused, for example to build the same image for a number of different platforms. In Buildroot, each platform requires a separate monolithic configuration, which means a great part of it will be duplicated.

While developers may find the Yocto Project to have a somewhat steeper learning curve, its well-defined project structure saves time, optimizes work, and, most importantly, lends itself very well to improved scalability: both vertical, towards increasingly complex embedded solutions, and horizontal, towards build approaches and platforms that support complex product families in a uniform manner. From the developer’s point of view, because Yocto allows the code and the configuration to be organized in separate layers stored in separate repositories, both the development and the maintenance of projects become well structured and easy to organize. All these features make a Yocto-based work environment effective, convenient, and well suited to all kinds of long-term development. This reduces time to market and the required effort, especially for products with multiple configurations and base platforms.

Less talked about, but still great Rust features!

Rust is one of the most talked-about languages of recent years, as the Firefox browser is gradually being rewritten to use it and more and more companies are starting to adopt it. While its most prominent feature – the borrow checker – has been discussed numerous times, there are also many smaller features which provide a comfortable experience while developing in Rust.

Convenient switching between toolchain versions
One of the most common issues that affect Rust programmers is that the most tempting language features are always available only in nightly. On the other hand, for professional development it’s a good idea to keep the ecosystem stable, with one toolchain version that is expected to make our codebase work and that will be supported for as long as possible. Is it possible to seamlessly switch between different releases of the Rust toolchain?

Enter rustup – the Rust toolchain installer. It’s the tool that allows not only installing, but also updating and switching between different toolchains.
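
For illustration, a typical session might look like this (all of these are standard rustup subcommands; the toolchain names are just examples):

rustup toolchain install nightly   # add the nightly toolchain alongside the default one
rustup override set nightly        # use nightly only inside the current project directory
rustup default stable              # switch the global default back to stable
rustup update                      # update all installed toolchains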

The main rustup website is rather minimalistic, but anyone in need of more information can find it in the rustup project’s quite extensive README.

Module system
A project’s source code can rarely be contained in a single file. Different programming languages have different approaches to solving this problem – one of the best known in the embedded programming world is the C/C++ way: #include, which essentially causes the referenced file to be pasted in its entirety by the preprocessor.

This solution is simple to implement, easy to understand, and usually works. There are pretty severe issues with this approach though: any changes made in one file can spill into all others which include it, directly or indirectly. It can be a rogue #define that changes the behaviour of the code, or a using namespace directive.

There’s also a risk of including one file more than once, which would cause things to be defined twice – every C/C++ developer already has the include guard in their muscle memory or editor templates. Rust solves the problem differently: by using a module system.

Modules in Rust can work inside a single file in a way similar to namespaces in C++, with the added benefit that module contents are private by default – anything that needs to be visible outside the module needs an extra pub keyword.

But when the module itself is marked pub, it can be used from other source files while remaining completely independent of them. More information can be found in the Module System to Control Scope and Privacy chapter of the Rust Book.
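
A tiny sketch of what this looks like in practice (all names invented for illustration):

// src/lib.rs – `pub mod` makes the module reachable from other files and
// crates; items inside it stay private unless they are marked `pub` as well.
pub mod game {
    pub struct Snake {
        segments: Vec<(u8, u8)>, // private field, invisible outside the module
    }

    pub fn new_snake() -> Snake {
        Snake { segments: vec![(0, 0)] }
    }

    #[allow(dead_code)]
    fn internal_helper() {} // not pub: usable only inside `game`
}

// Another file in the same crate, or a dependent crate, can then write:
// use my_lib::game::new_snake;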

Crates
Almost all software developed today stands on the shoulders of giants by using at least one third-party library. Most modern programming languages provide an easy way for the code to depend on a given version of a third-party library, which can be automatically downloaded and built by the build system. Rust isn’t different in this matter: cargo, the build system used by Rust, gets the current project information from the Cargo.toml file, where dependencies on other libraries (called crates) and their versions are also stored. Then, during the build process, the crates are downloaded and built.
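
As a rough sketch (the rand crate and its version are just an example of a dependency):

// Cargo.toml (fragment):
//
// [dependencies]
// rand = "0.7"
//
// With the dependency declared, the crate can simply be used in the code:
use rand::Rng;

fn main() {
    // gen_range(low, high) returns a number in the half-open range [low, high)
    let roll: u8 = rand::thread_rng().gen_range(1, 7);
    println!("the die shows {}", roll);
}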

Crates can be browsed using the Rust Package Registry, which also conveniently provides a snippet that can be pasted into Cargo.toml to use the chosen crate.

As an interesting side note, cargo and crates are both references to the cargo cult. Given the current development practice of stitching libraries together until things work, it’s a pretty humorous touch.

Integrated testing
Software testing is one of the basic elements of modern code hygiene. There are testing frameworks for almost all programming languages, but Rust decided to be a step ahead: a simple test framework is provided with the language itself. Citing a simple example that can be found in the Writing Tests chapter of the Rust Book:

#[cfg(test)]
mod tests {
    #[test]
    fn it_works() {
        assert_eq!(2 + 2, 4);
    }
}

In this example, the tests module will be compiled only during test execution, and the it_works function is marked as a test, so it will be run when the cargo test command is invoked.

But that’s not all – it’s possible not only to test the correctness of our program, but also to benchmark it. To do that, one creates a function marked as a bench, taking a Bencher object which does the measurement. Thanks to that, initialization of objects can be done outside of the measured code fragment. At the time of writing this text, benchmarking is an unstable feature and needs to be enabled using the test feature. More information can be found under the test library features in the Unstable Book.
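
A minimal sketch of such a benchmark on a nightly toolchain (the measured function is made up):

#![feature(test)] // nightly-only: enables the unstable test crate
extern crate test;

fn sum_of_squares(n: u64) -> u64 {
    (1..=n).map(|x| x * x).sum()
}

#[cfg(test)]
mod benches {
    use super::*;
    use test::Bencher;

    #[bench]
    fn bench_sum_of_squares(b: &mut Bencher) {
        // Any setup can happen here; only the closure passed to iter() is measured.
        b.iter(|| sum_of_squares(1_000));
    }
}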

Documentation tests
Rust also provides rustdoc, an automatic documentation generation system similar to Doxygen or Sphinx. One of the most common problems when documenting rapidly moving codebases is keeping the documentation up to date. Rust tackles the issue by allowing one to include code samples with assertions in the documentation, which are then compiled and evaluated alongside the tests. More information can be found in the rustdoc documentation.
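
A short sketch of such a documented (and therefore tested) function, with a made-up crate and function name:

/// Moves the snake one cell to the right.
///
/// ```
/// assert_eq!(my_crate::step_right(3), 4);
/// ```
pub fn step_right(x: u8) -> u8 {
    x + 1
}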

Summary
Rust provides not only a lot of nice programming constructs, but also a really comfortable ecosystem that aims at solving the practical problems programmers face in their daily jobs; therefore, it is shaping up to be not merely an academic curiosity, but a tool that can be used in long-running commercial projects. If you haven’t played with it yet, try it out – Rust is not only a useful tool, it’s also lots of fun.

Meet Thaumatec during Cloudfest in Germany!

Finally, Microsoft Cloudfest in Rust, Germany is just around the corner – it takes place next week, and Thaumatec will certainly be attending. For the third year, Cloudfest takes place in a different location but with the same objective: have fun while solving real-life problems. Thaumatec, for its part, is excited to build further on our IoT and embedded expertise and extend it with MS Azure IoT and cloud application modernization. After all, there is a broad market in those areas, with great advances that can simplify many day-to-day tasks that few know about – luckily, Thaumatec can help you.

Cloudfest is the place to be next week, especially because Thaumatec will be there learning new ways to improve what we already know so well: embedded and IoT skills. Our services around Azure IoT and cloud modernization are for data-driven companies that embrace digital transformation, meaning everything is controlled from the cloud, monitored, and even automated. Automation is just one of the qualities of Azure IoT and cloud modernization; they also bring connectivity, two-way communication, and everything being handled from one place: the cloud.

Thaumatec’s visit to Cloudfest will help us understand more complex problems around Azure IoT and cloud modernization, and will also give us a broader perspective on other areas such as universal acceptance, managed services, controlling data, big data enablement, machine learning, and many more. Come and join the fun, but most of all, let’s talk about the possibilities of MS Azure!

What’s new in The Things Network? What we saw during the TTN Conference

The Things Network is unique of its kind: a huge, publicly available, global LoRaWAN™ network connecting over six thousand gateways in more than 130 countries. Because LoRa technology is tightly coupled with Thaumatec’s DNA, we didn’t hesitate a minute to save our spots at the annual event – The Things Conference – that took place in Amsterdam, Netherlands at the turn of January and February 2019. It was two long days, full of workshops and presentations explaining new trends and directions in the LoRa industry.

We arrived at the venue early in the morning and quickly confirmed our assumptions about how active the community is. Walking by the conference booths we had a lot of great conversations with representatives of the industry leaders, and just after grabbing a cup of coffee and making our way to the main stage we stumbled upon a giant wall presenting a multitude of recent LoRa-related hardware developments, be it new sensors, gateways, antennas, or microchips. It took a significant amount of time even to just briefly review all the gadgets.

The main stage was where the most interesting things happened, and there were indeed a lot of engaging announcements to be made by the event hosts and invited guests. The first thing worth noting is the launch of “The Generic Node” device, which is an answer to the problem of the high TCO of LoRa-based systems. You no longer need to design, develop, and fabricate a specific device to serve your project’s needs. You can quickly deploy the generic device and focus on prototyping your solution. The Generic Node can be easily connected to the TTN network and provide you with telemetry data from many internal sensors, such as light, accelerometer, temperature, touch, humidity, etc. If you want to use LoRa technology without doing hardware design, The Generic Node is for you.

It is also worth noting that The Generic Node takes advantage of another piece of engineering art that was presented at the conference: the ATECC608A microchip, which provides hardware-based security for the LoRa transmission layer. The introduction of the chip makes device provisioning secure and makes it impossible to tamper with the embedded cryptographic keys.

Source: https://www.electronicdesign.com/iot/teaming-secure-lorawan

But the ultimate announcement was still to come. The founders of The Things Network decided that the LoRaWAN stack should become accessible for everyone to use. Therefore, right during the presentation on the main stage, they made the GitHub repository of the V3 software stack public, giving everybody access to an implementation of the network stack that strictly follows the LoRaWAN Network Reference Model. What an announcement!

In between the engaging presentations we had a chance to participate in several workshops. One of them was related to the Azure IoT Hub and the ways one can utilize it to quickly develop IoT applications in the cloud. During the hands-on workshop, specialists from Microsoft explained how you can connect your IoT device to the Azure cloud in literally less than 30 minutes, and that even included flashing the device with the appropriate software. Once your telemetry data lands in the cloud, you can use straightforward drag-and-drop mechanisms to quickly prototype your solution.

Finally, at the end of the second day, just before leaving the conference, we were once again nicely surprised. Everyone who attended the event was given a brand-new LoRa gateway, whose launch was also announced during the conference. This is a fully functional indoor LoRa gateway which is going to be available for purchase at a disruptively low price. It’s amazing that you no longer need to spend a lot of money on a gateway, nor do you need professional hardware development skills, to start your adventure with LoRa communication.

All the information above is just a small excerpt of the content available at The Things Conference. If you plan to extend your LoRa-related knowledge, or you just want to get acquainted with the technology, we definitely recommend visiting the conference next year.

Our thoughts on ECS 2018

At the beginning of November, we visited the Embedded Conference Scandinavia, which took place in Stockholm, Sweden. It was two days very well spent listening to fascinating lectures and chatting with the countless professionals who came there to present their most recent activities in the embedded world.

This year was even more interesting, since we decided to join the noble group of presenters and share our thoughts and experiences with the LwM2M protocol in the IoT world. If you follow our blog, you have surely seen that LwM2M is the next big thing in the area of device management.

The exhibition hall was crowded from the very morning, confirming the forecast that the IoT industry is developing very rapidly these days. While wandering between the exhibition booths we were amazed by the amount of thought and ideas that had been put into all kinds of different products presented there. Sensors, displays, antennas, cameras, drones, industrial automation devices… you name it. Considering that at each booth you could meet helpful and engaged staff, we found it hard to find time to participate in the lectures.

Speaking of lectures – this year the ECS team organized more than fifty presentations covering a broad variety of topics, including wireless communication, security, quality, etc. But the conference was not only about gathering theoretical knowledge. If you preferred to get your hands dirty, you could join one of the six workshops and get acquainted with new technologies in a practical way.

Spending two days surrounded by the novelties of the embedded business is very exciting. You have a chance to grow your contact network very rapidly and get back in touch with people you met during previous editions of the conference. We were also pleasantly surprised that some of our customers attended this year’s edition of the gathering.

You can clearly see that Stockholm, Sweden is on the bleeding edge with regard to innovations in the embedded world, and we cannot wait to visit ECS next year!
