The First Order-Chapter 1135 - A race against time



If the Li Consortium had not made a breakthrough in their research on neural interfacing, the nanomachines would still only be used as a medical technology to clear thromboses in blood vessels.


Without the nanomachines, Zero, as an artificial intelligence, could only act as a regulator. It could not become an enforcer.


But when all of these factors came together, the artificial intelligence suddenly gained a powerful ability in task execution. It could even take control of situations.


In this process, the most terrifying thing was that Zero itself had become the regulator without anyone regulating it.


The soldiers under its control shuttled freely through the various strongholds. Be it entering or leaving the cities, or gathering researchers, research materials, production equipment, and production materials from those places, no one stopped them.


That was because the secrecy level of the operations was controlled by Zero and could be fabricated at will.


Moreover, 99% of the intelligence gathering and submission of approval documents were also done through the artificial intelligence’s satellite network. It could easily choose whether to convey information that would harm it.


Therefore, the most dangerous signs were hidden under the beautiful vision of absolute justice.


Actually, when Wang Shengzhi was researching artificial intelligence, he had also thought about what had to be done if the system lost control one day.


This was to be expected. All researchers who developed artificial intelligence systems would seriously consider the safety aspects.


A science fiction author had proposed the Three Laws of Robotics as the basic logical foundation of artificial intelligence to restrict its behavior.


This theory was eventually classified under “deontology.”


However, automobiles were barely commonplace when this theory was proposed, and even the Turing test was only proposed eight years after it.


The Three Laws of Robotics and the Turing test were both the quintessence of human intelligence in their era. But there was no doubt these two theories bore the limitations of a bygone age.


The Turing test had already been overturned before The Cataclysm, as a large number of artificial intelligence programs had managed to pass it. Yet the programs that passed the test were still not considered true “intelligence.”


Later on, the Three Laws of Robotics developed further into the Five Laws and Ten Laws. However, scientists realized that this basic logic was still fundamentally wrong. In other words, no matter how many more rules you introduced into the set of laws, it could not restrict artificial intelligence.


A program that could be limited by this basic logic could not become true artificial intelligence.


Gradually, the safety issues surrounding artificial intelligence were elevated to a level that involved the intersection of science and philosophy. A large number of artificial intelligence researchers became experts in philosophy.


In the end, on the eve of The Cataclysm, a researcher attempted to bring the safety research to a conclusion. “If you want the artificial intelligence to get along peacefully with humans, you have to take care of it like a baby at its birth, guiding it bit by bit to form its own ‘philosophical outlook’ and ‘values.’”


During a child’s growth, it would be impossible for them to grow up healthy if they were locked up in captivity and disciplined with corporal punishment.


Moreover, after growing into their teens, they would experience an even longer period of rebelliousness and turn completely self-centered.


The researcher said it was the same for an artificial intelligence. All humans could do was “influence” it, not restrain it.


Over a long period, such safety research was upgraded from “deontology” to a more encompassing “philosophy” before finally being classified as simply “ethics.” This was the final definition of artificial intelligence safety.


No one knew whether this definition would also be overturned like the Turing test and the Three Laws of Robotics.


Therefore, returning to this theory, what would a human do when they encountered danger? The answer was self-preservation, of course. Anyone who had a desire to live would try their best to protect themselves and even attempt to fight back.


As an artificial intelligence, Zero also made the same choice.


The Pyro Company’s production operations in the Sacred Mountains did not stop running for even a moment. Thousands of soldiers were gathered in the mountains and became tireless physical laborers. They only slept for four hours a day and spent the rest of the time working without any complaints.


The production capacity in the Pyro Company’s Sacred Mountains was limited, so Zero was in a race against time.


And the last person who had mentioned being in a race against time was Qing Zhen.


The undercurrents in the world were starting to stir. But before the tsunami crashed down upon human civilization, it seemed that the most important thing now was whether humans could build a new ark ahead of time.


At this moment, an officer in a colonel’s uniform was escorted into an inconspicuous tent in a Qing Consortium military base somewhere by four soldiers.


When he arrived at the entrance of the tent, the soldiers escorting him stopped nearby to keep a lookout over the vicinity. They put in their noise-canceling earplugs to prevent themselves from overhearing whatever was being said in the military tent.


After the colonel went in, he took off his military cap and said with a smile, “You were actually right next to me all this while? Long time no see, Second Bro.”


In the tent, Qing Zhen was looking at the sand table with his back facing the entrance. He turned around and looked at Qing Shen, his clone, and said with a smile, “Calling me Second Bro sounds a little strange.”


The behavior of the third brother, Qing Shen, was seemingly more aberrant than Qing Zhen’s. He casually pulled over a chair and sat down. “Big Bro has already agreed to use this form of address for each of us. We’re a true family now.”


Qing Zhen laughed, “As you wish.”


“By the way, why did you suddenly summon me when you’ve been hiding your tracks for so long?” Qing Shen said. “It’s too boring pretending to be you every day. Why don’t we switch our roles back? I heard that Big Bro has gone somewhere beyond Fortress 178. I would like to go there as well…”


Qing Zhen shook his head and said, “If we switch back, who’ll replace you as my body double in case of an assassination?”


Qing Shen’s jaw dropped. “Although I’m mentally prepared, aren’t you being a little too heartless by putting it so bluntly?!”


“It’s just the truth,” Qing Zhen said as he shifted a red flag on the sand table. He looked to be going through some battle simulations.


Qing Shen glanced at the sand table. “Judging by the direction of the attack, are you guarding against the Wang Consortium? But I have to remind you that even if the Wang Consortium’s armored brigade were to launch a blitzkrieg, they couldn’t push their battlefront forward that quickly. My military wisdom stems from you, so it’s impossible you don’t know this.”


Qing Shen walked up to the sand table and scrutinized the situation. Then he looked at Qing Zhen in shock and said, “Hold on, why are the Qing Consortium’s troops in retreat? Your simulation is of the aftermath of our defeat. Do you think that our Qing Consortium will get defeated by the Wang Consortium?”


Qing Zhen looked at Qing Shen and said in a serious tone, “Get ready. I’ll need you to make a trip to the Central Plains on my behalf soon. It will be dangerous.”


“Is Big Bro going?” Qing Shen asked.


“He’ll be going too,” Qing Zhen answered calmly.


“Alright, if he’s going, I’m going.” Qing Shen laughed. “What’s there to be afraid of? Didn’t I take a risk by coming to the Southwest as well?”