"Edge computing receives a significant boost from a new optical device developed by researchers from the Tokyo University of Science, capable of real-time signal processing across various timescales. This device, demonstrating high classification accuracy on the MNIST dataset, represents a leap forward in efficient and cost-effective computing at the edge, offering a promising solution for applications requiring rapid data processing and analysis". (ScitechDaily, Revolutionizing Real-Time Data Processing: The Dawn of Edge AI)
The problem with traditional systems is that they operate on a single, fixed timescale. The new microchip can process signals across various timescales. And if the processor is built on a multilayer, multicore architecture, it can work on different timescales at the same time while finding the right answer from the data stored in its databases.
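As a rough software analogy of what "different timescales at the same time" can mean, the sketch below runs the same signal through smoothing windows of different lengths in parallel. The window sizes and the moving-average operation are my own assumptions for illustration, not the operation performed by the optical device itself.

```python
# Minimal sketch of multi-timescale signal processing (illustrative only).
# Window lengths and the moving-average operation are assumptions, not the
# actual mechanism of the optical device described in the article.

def moving_average(signal, window):
    """Smooth a signal with a simple moving average over the given window size."""
    smoothed = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

signal = [0, 1, 0, 8, 0, 1, 0, 9, 0, 1, 0, 7]

# Analyze the same signal on three different timescales at once.
timescales = {"fast": 2, "medium": 4, "slow": 8}
views = {name: moving_average(signal, w) for name, w in timescales.items()}

for name, view in views.items():
    print(name, [round(v, 2) for v in view])
```

The fast view keeps the sharp spikes visible, while the slow view shows only the long-term trend; a fixed-timescale system would have to pick one of those and lose the other.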
The new AI systems combine photonic and electronic processors. That combination makes them more powerful than today's systems, and they can analyze mission-critical data at extremely high speed. The dual-speed system analyzes data in two stages. The first and faster analysis happens in the photonic processors, and behind them the same data travels through the electronic system. That allows the system to compare the results it gets from the two stages.
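A very rough way to picture that two-stage idea in code: a fast, coarse first pass produces a quick result, a slower second pass produces a more careful one, and the system compares the two before trusting the answer. The function names and thresholds below are invented for illustration, not part of the published device.

```python
# Sketch of two-stage analysis with a cross-check between the stages (assumed design).

def fast_stage(sample):
    """Stand-in for the fast (photonic-like) first pass: a cheap threshold test."""
    return "alert" if sample > 0.5 else "normal"

def slow_stage(sample):
    """Stand-in for the slower (electronic-like) second pass: a stricter test."""
    return "alert" if sample > 0.6 else "normal"

def analyze(sample):
    first = fast_stage(sample)
    second = slow_stage(sample)
    # Only trust the answer when both stages agree; otherwise flag the disagreement.
    if first == second:
        return first
    return "disagreement - re-check"

print(analyze(0.9))   # both stages agree  -> "alert"
print(analyze(0.55))  # stages disagree    -> "disagreement - re-check"
```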
In traditional systems there are also two main paths for the data-handling process, but the problem is that both of them use only electronic components. The point of correlating the two paths is error detection. If the system uses a staged model with different types of processors, error detection becomes easier, because the errors that affect photonic systems do not affect electronic systems in the same way.
In the traditional system, the transmitter sends a copy of the data packet that it transmits to the receiver to the error-detection servers. Then the receiving system sends a copy of the received data packet to the same error-detection servers, and those servers can see whether there are differences. If there are no differences between the two data packets, the server gives permission to begin the operation.
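As a plain software analogy of that copy comparison (using hashes in place of full signal copies; this is an illustrative assumption, not the actual protocol):

```python
# Sketch of the "compare the sender's copy with the receiver's copy" error check.
import hashlib

def digest(packet: bytes) -> str:
    """Fingerprint of a data packet, standing in for a full byte-by-byte copy."""
    return hashlib.sha256(packet).hexdigest()

def error_check(sent_copy: bytes, received_copy: bytes) -> bool:
    """The error-detection server grants permission only when the copies match."""
    return digest(sent_copy) == digest(received_copy)

sent = b"measurement:42"
received = b"measurement:42"   # in a corrupted transfer this would differ

if error_check(sent, received):
    print("permission granted: begin the operation")
else:
    print("mismatch detected: operation blocked")
```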
In the new systems, the data travels along different routes using different technologies. If the server sees that there are no differences between those routes, the action is authorized.
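The new arrangement could be sketched like this, again as an assumption about the logic rather than the actual implementation: the same payload arrives over two independent routes, and the action is authorized only when the routes agree.

```python
# Sketch of dual-route agreement before authorizing an action (assumed logic).

def authorize(route_a_payload: bytes, route_b_payload: bytes) -> bool:
    """Authorize only if both independent routes delivered identical data."""
    return route_a_payload == route_b_payload

photonic_route = b"command:open-valve"
electronic_route = b"command:open-valve"

if authorize(photonic_route, electronic_route):
    print("routes agree: action authorized")
else:
    print("routes disagree: action refused")
```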
There are systems that collect data from different input devices, and there are systems whose mission is data storage. The idea is that when the system sees something, it compares the data from its sensors with the data stored in its memories, and then it makes a decision.
The problem with this type of system is that time-critical systems must react very fast. The system must run error detection and then start the right reaction almost within the same moment, because there is no time for long-term analysis.
Put another way, the decision works like this: when the system finds a match between the data coming from a sensor and data stored in some database, that match connects the reaction to the observation. The system has a description of the things it should react to, and when incoming data matches that description, a reaction starts.
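That match-then-react logic could be sketched as a simple lookup. The rule table, sensor names, and reactions below are invented examples, not anything from the article.

```python
# Sketch of "observation matches a stored description -> start the reaction".
# The rule table and the sensor reading are invented for illustration.

REACTION_RULES = {
    # stored description          -> reaction to trigger
    ("temperature", "above_limit"): "shut down the heater",
    ("smoke", "detected"):          "trigger the alarm",
}

def decide(sensor, state):
    """Return the reaction connected to this observation, if a stored description matches."""
    return REACTION_RULES.get((sensor, state))

observation = ("smoke", "detected")
reaction = decide(*observation)
if reaction:
    print("reaction started:", reaction)
else:
    print("no stored description matches; no reaction")
```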
But when the system requires a fast reaction, it must duplicate the data and send it to all servers at the same time, because that makes sure every server gets the data very fast. The problem is that if two servers both interpret the data as applying to them, that can cause a conflict, and for that reason the system must have a hierarchy among those servers.
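To avoid two servers claiming the same task when data is broadcast to all of them, a simple priority order can decide who acts. The server names, ranks, and matching rules in this sketch are assumptions made for illustration.

```python
# Sketch of broadcasting data to all servers with a hierarchy to resolve conflicts.
# Server names, ranks, and the matching rules are invented for illustration.

SERVER_HIERARCHY = ["primary", "secondary", "backup"]  # lower index = higher rank

def interested_servers(data):
    """Stand-in check: which servers think this data applies to them."""
    interests = {
        "primary":   data.startswith("critical"),
        "secondary": "sensor" in data,
        "backup":    True,  # the backup accepts anything
    }
    return [name for name, wants_it in interests.items() if wants_it]

def broadcast_and_resolve(data):
    """Send the data to every server, then let the highest-ranked interested server act."""
    candidates = interested_servers(data)
    for server in SERVER_HIERARCHY:   # the hierarchy breaks the tie
        if server in candidates:
            return server
    return "none"

print(broadcast_and_resolve("critical sensor reading"))  # primary wins the conflict
print(broadcast_and_resolve("sensor reading"))           # secondary handles it
```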
https://scitechdaily.com/revolutionizing-real-time-data-processing-the-dawn-of-edge-ai/