Is the Universe natural, artificial or simulated?


This post is inspired by the following culprits:

http://www.bbc.com/earth/story/20160901-we-might-live-in-a-computer-program-but-it-may-not-matter
https://arxiv.org/abs/1210.1847
https://en.wikipedia.org/wiki/Simulation_hypothesis
http://www.nature.com/news/simulations-back-up-theory-that-universe-is-a-hologram-1.14328

These two links are also important.

https://en.wikipedia.org/wiki/Turing_machine
https://en.wikipedia.org/wiki/Hypercomputation

So, onto the problem. Based on calculations published in New Scientist in the 1980s, it would require an energy density of 2.1x10^22 J/m^3 to inflate a new universe. Now, remembering that E=mc^2, and that c is roughly 3x10^8 m/s (so c^2 is roughly 9x10^16), we can calculate how much mass would be required to do this. It is very roughly 2.3x10^5 kg of mass converted into pure energy and compressed into a cubic metre.
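As a quick sanity check on that arithmetic, here is a minimal back-of-the-envelope sketch. The 2.1x10^22 J/m^3 figure is the one quoted above; everything else is just E = mc^2 rearranged.

```python
# Back-of-the-envelope check: how much mass, converted entirely to energy,
# gives an energy density of 2.1e22 J per cubic metre?
energy_density = 2.1e22   # J/m^3, the figure quoted from New Scientist
c = 3.0e8                 # speed of light, m/s
volume = 1.0              # one cubic metre

mass = energy_density * volume / c**2   # E = m * c^2, rearranged for m
print(f"Mass required: {mass:.2e} kg")  # roughly 2.3e5 kg
```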

It's obviously a huge technical challenge, but we're talking about realistic amounts of mass. At the conversion efficiencies we're actually capable of, though, the numbers become obscene. The only way to do it would be to force the largest hydrogen bomb ever built to implode, compressing all of its energy into one cubic centimetre. That isn't possible at our level of technology. Even the LHC can't manage more than 13 TeV per collision (about 2.1x10^-6 J), roughly 28 orders of magnitude short of the energy required. You can see why nobody was too worried about the LHC destroying the world.
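For scale, here is the same sort of rough comparison for the LHC figure, converting 13 TeV to joules and dividing it into the energy needed for one cubic metre at the density above:

```python
# Rough scale comparison: LHC collision energy vs the energy needed
# to hit 2.1e22 J in a single cubic metre.
eV_to_J = 1.602e-19            # joules per electronvolt
lhc_energy = 13e12 * eV_to_J   # 13 TeV per collision, about 2.1e-6 J
required = 2.1e22              # J for one cubic metre at the quoted density

print(f"LHC collision energy: {lhc_energy:.1e} J")
print(f"Shortfall factor: {required / lhc_energy:.1e}")   # roughly 1e28
```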

The technology used for accelerators has pretty much hit its limit. I'm going to guess you'd need an accelerator ring running between Sol and Alpha Centauri before you could reasonably get close. That's not outside the potential of a sufficiently advanced civilization, so we must assume that somewhere in the universe, or perhaps the multiverse, someone has done this, although our universe is not necessarily a product of such technology.

Now, onto the idea that the universe is a simulation. Even with a device capable of hypercomputation (or, indeed, block transfer computations, for Doctor Who fans), simulating a universe is hard work.

The key question is one of reduction. If the smallest representation of the universe that permits accurate simulation is the universe itself, then building one is far more practical than uploading one into a HAL 9000 or an iPhone.

This is where precision comes into play. If space or time isn't quantized, you need infinite precision, and a computer can't handle that.
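To make that concrete, here is a trivial illustration with ordinary Python floats: a finite-width number can't represent a continuum, so any un-quantized coordinate gets rounded to the nearest value the machine can store.

```python
import sys

# Finite-precision arithmetic can't represent a continuum exactly.
# A 64-bit float carries about 15-16 significant decimal digits, so any
# coordinate on a continuous axis is rounded to the nearest representable value.
x = 0.1 + 0.2
print(x)                        # 0.30000000000000004, not 0.3
print(x == 0.3)                 # False
print(sys.float_info.epsilon)   # ~2.2e-16, the relative spacing between adjacent floats
```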

https://en.wikipedia.org/wiki/William_Rowan_Hamilton
https://en.wikipedia.org/wiki/Broom_Bridge

It's also where quantum uncertainty comes into play. You see, an object cannot have a precisely known position and a precisely known momentum at the same time. There is a level of uncertainty that cannot be improved on, because beyond that the information doesn't actually exist to be known. We also know, thanks to Sir William Rowan Hamilton as of 16th October 1843, that the universe is four dimensional. Spacetime. Three dimensions simply don't work. Physicists resent this result because they didn't think of it first or some such rot, and still refer to the universe as 3+1 dimensional. They are, of course, wrong in doing so. Fortunately, they dump the silliness in their calculations and get the right results.
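For reference, the floor on that uncertainty is the standard Heisenberg relation between position and momentum (textbook form, nothing specific to this argument):

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2} \approx 5.3 \times 10^{-35}\ \mathrm{J\,s}
```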

Anyway, the consequence of spacetime plus quantum uncertainty is that you cannot know exactly where something is in time, either. One interesting result of this is that you can use a femtosecond pulse laser to generate an interference pattern in time rather than space. I'm not sure of its usefulness, but it plays havoc with a simulation: you'd have to simulate the widest band of time in which particles from the past and future directly interact.

https://en.wikipedia.org/wiki/Planck_time

How much time is that? Well, assuming that time is granular and one grain is one Planck time, and assuming a femtosecond is the interval you need to cover, you need to track roughly 1.9 x 10^28 snapshots of the universe in order not to make any mistakes. That's a lot of universes.
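The count itself is a one-line division, using the Planck time value from the page linked above:

```python
# How many Planck-time "grains" fit inside one femtosecond?
planck_time = 5.39e-44   # seconds
femtosecond = 1e-15      # seconds

slices = femtosecond / planck_time
print(f"{slices:.1e} snapshots")   # roughly 1.9e28
```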

Still, it's not an impossible number. It's finite. The observable universe contains roughly 10^80 electrons, far short of a googol (10^100) but still vastly more than 10^28. If you can simulate that many electrons, you can simulate that much of a chunk of time. Again, it helps if you can reduce the problem.

https://en.wikipedia.org/wiki/Pauli_exclusion_principle
https://www.theguardian.com/science/life-and-physics/2012/feb/28/1

This is where the question of reduction becomes... heated. At the core of it is an argument that amounts to Brian Cox vs Everyone Else. And if the simulation hypothesis is to be correct, Brian Cox has to end up being right. This is going to upset a lot of people and will be widely regarded as a bad move, to misquote Douglas Adams.

Basically, the Pauli Exclusion Principle says that particles such as electrons have to be in unique states. Brian Cox's claim was that this must be true at all times: you can't have an observer midway between two electrons observe the principle to be false. Everyone else says that's rubbish, because information is limited to the speed of light in normal space. M-Theorists are still debating what the M means.

Brian Cox can be right, but it requires the information not to travel through normal space. It requires all electrons to be cross-sections of a single M-Space brane with a defined structure. That last bit is important. If you rotate a shape, all protrusions rotate at the same time. You cannot rotate an L-shaped object and end up with equal-length sides, an extra side, or one of the sides twisted at an odd angle. It remains an L.

Because all points of the universe are a few Planck lengths from any given brane, the total time it would take for information to traverse the brane and affect every electron is, well, considerably less than a femtosecond. Far too short a time to detect any discrepancies.
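To put a number on "considerably less", here is a rough sketch that takes "a few Planck lengths" to mean five and assumes the signal crosses at light speed; both assumptions are mine and purely illustrative.

```python
# Rough crossing time for a few Planck lengths at light speed,
# compared against a femtosecond. Illustrative assumptions only.
planck_length = 1.616e-35   # metres
c = 3.0e8                   # m/s
crossing_time = 5 * planck_length / c   # "a few" taken as 5

print(f"Crossing time: {crossing_time:.1e} s")                   # ~2.7e-43 s
print(f"A femtosecond is {1e-15 / crossing_time:.1e}x longer")   # ~3.7e27 times longer
```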

Now, if this is the correct model, then there are N degrees of freedom for the brane, and therefore only 2N possible changes that can be made in any given step. You can take as many steps as you like, but each one can be treated uniquely.

https://en.wikipedia.org/wiki/Coxeter_notation
https://en.wikipedia.org/wiki/Coxeter–Dynkin_diagram

From a computational standpoint, it replaces roughly 10^80 distinct calculations with a single 10^80-element Coxeter diagram and one matrix calculation. That's a big reduction. The same thing is true for all leptons. If you can actually do this for everything in the particle zoo, simulation again becomes imaginable for an advanced civilization.
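As a toy illustration of that kind of reduction (an ordinary 3D analogy, not the actual brane mathematics): if N points are cross-sections of one rigid structure, a single matrix operation updates all of them at once instead of requiring N independent calculations.

```python
import numpy as np

# Toy analogy: N points sharing one rigid structure can all be updated
# with a single matrix operation rather than N separate calculations.
N = 1_000_000
points = np.random.rand(N, 3)    # stand-ins for N "electron" cross-sections

theta = 0.1                      # one shared change: a small rotation about the z-axis
rotation = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           1.0]])

updated = points @ rotation.T    # one matrix multiply moves every point together
```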

If Professor Cox is wrong, then every electron must be calculated individually, whether or not there are membrane effects to consider. This puts simulation out of the picture. The universe is too complex.

Well, ish. There is a debate as to whether physics is real or just an emergent property of mathematics. If it's an emergent property, then you only need to simulate the necessary system, not the physics. The physics results from the system existing, without any additional requirements.

Here, the question of computability or hypercomputability depends on whether the system is something amenable to simulation. The only obvious way you can avoid infinite recursion is if the system cannot be simulated within a simulation. However, that rather eliminates the value of a simulation as it removes an essential degree of freedom.

Asimov suggested two solutions to this debate:

http://multivax.com/last_question.html
http://asimov.wikia.com/wiki/Jokester
(the story is in PDF form, so I'm linking to the link for it)


Awesome post! I'm finding that the bulk of the community here is too thick to understand matters such as this though.

You should look up the Church-Turing-Deutsch hypothesis. In regard to QC, it means simulated realities are informationally indistinguishable.

You're totally right about computationally modeling the universe per atom being out of the question.