Pooya's General Simulation Hypothesis™: Comprehensive and Scientific
COPYRIGHT © 2020 GENERAL SIMULATION HYPOTHESIS ™ - ALL RIGHTS RESERVED.
We provide consultations in the form of one-on-one meetings, conferences, and live or online presentations.
To arrange a consulting or presentation event, whether online or in person, please contact us at: generalsimulationhypothesis@gmail.com
Our approach and hypothesis are novel, exciting, and thorough. They can open doors to new approaches to science, technology, and even everyday life.
The goal of this website is to expand on the General Simulation Hypothesis©™ (a novel scientific theory postulated by Pooya Hejazi, distinctly different from the prevalent philosophical approaches) in a manner that makes it accessible both to scientists deliberating the subject matter and to members of the general population with an interest in the theory.
The website begins below by presenting the theory concisely and then expands on it in a scientific format but in layman's language. The main goal of this publication is to further develop Pooya's General Simulation Hypothesis©™ and to put to rest imaginative concepts that are presented as pseudo-science but are easily rejected scientifically.
Though some aspects of the theory may create controversy, both generally and scientifically, we believe that those without personal or collective financial incentives against it will be thrilled to consider the novel approach taken in this publication.
An additional writing by the author from 2018 is also referenced, as it was the precursor to this publication. Furthermore, a video presentation that fleshes out the theory in a live format has been made and can be sent, for a reasonable fee, to audiences that prefer that format.
The Theory in a Nutshell: The physical universe that we live in is a falsifiable, yet very convincing, simulation of reality run on an enormously powerful computational platform that is limited in capacity and has some distinct characteristics. Though users from outside the simulation could theoretically be inserted into it, the main concern of the theory is the simulation itself from the perspective of living observers, the most notable of which, to us, are individual humans.
Abstract: Based on all the known physics and evidence accumulated by mankind to date, the physical universe is a holographically projected quantum simulation run on an enormously powerful computational platform with strict limits and distinct characteristics. Local and non-local reality is falsifiable, as space is illusory while time is not. Most importantly, there exists no space-time fabric: time is an inherent and fundamental aspect of the simulation by design, whereas space is arranged so as to expand and contract in order to give any observer a consistent sense of local reality. The constancy of the speed of light exists to give the observer a false sense of universality/universal reality no matter how space is contracted or expanded. Additionally, the projection of the simulation could inherently involve electromagnetic waves in some fashion. We consider that a low-probability explanatory factor for the constancy of the speed of light, even if the holographic projection did potentially involve electromagnetism and propagating waves in one form or another.
Background: Historically, humans have been searching for the fundamental elements that constitute our "physical reality". Yet as we progressed from the four elements of wind, fire, earth, and water to the more scientific molecules and atoms (a word that, ironically, meant indivisible), we confront oddities from multiple directions. First, there is no solid core. Matter in the solid state used to be considered compact; then it was discovered that it was formed of molecules, which themselves consisted of atoms. The atom was found to be an electron cloud around a heavy nucleus. Then the nucleus turned out to be made of fundamental particles that are massless or of extremely low mass (the up quark is estimated to weigh about 0.214% of the mass of a proton and the down quark about 0.510%). The finer the scale at which we look at these building blocks, the more obvious it becomes that we are not dealing with solid particles but with entities and sub-entities that operate and arrange themselves according to certain guidelines that redefine the space around them, akin to the attributes of a parameter in a computer program that is strictly defined by virtual assignment of values through the code of the simulation. This gets really strange when we consider that formulas used to describe the behavior of subatomic particles are akin to those used for error correction on websites. This strongly supports not a continuum of space but a segmentation of space along a grid. It would also go a long way toward explaining the expansion of space itself at the beginning of the big bang, since in a simulation the grid itself can be expanded.
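To make the grid picture above concrete, here is a minimal Python sketch of what segmented space could look like in code. The grid spacing, function names, and sample values are our own illustrative assumptions (the Planck length is used only as a stand-in), not anything fixed by the hypothesis; the point is simply that positions on a grid cost a bounded amount of information, and the grid itself can be rescaled.

```python
# Illustrative sketch only. GRID is a hypothetical cell size (the Planck length is a
# stand-in, not a claim); the hypothesis does not specify these names or numbers.

GRID = 1.616e-35  # hypothetical grid spacing in meters


def to_cell(x: float) -> int:
    """Snap a 'continuous' coordinate to the nearest grid cell index."""
    return round(x / GRID)


def from_cell(n: int) -> float:
    """The coordinate the simulation would actually store and render."""
    return n * GRID


if __name__ == "__main__":
    x = 2.5e-34                # an arbitrary "continuous" position
    n = to_cell(x)             # a finite integer index on the grid
    print(n, from_cell(n))     # 15  2.424e-34 (approx.) -> bounded information per position
```

In this toy picture, the expansion of space at the big bang would amount to rescaling GRID itself rather than moving anything through space.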
The other telltale sign is the lack of continuity and the quantum nature of fundamental particles and energy levels. There is no logical reason why, in an objective reality, a continuum of energy levels would not be allowable (and probably the same holds for space and particles). Yet the only way we can rationally justify quantized levels is as a means to reduce the computational burden of the simulation, provided that, as a consequence, there is no impact on the objective of the simulation or on its ability to convince observers that it is real.
Next we run into Schrodinger's probability wave function, which lowers the computational burden by not having to keep track of individual particles: a collective wave function is maintained instead, and "correct" values are randomly assigned whenever the wave function needs to collapse. This strangeness, which was the subject of intense debate and wonder by Schrodinger, Einstein, and Bell, is another reason to regard the physics we observe as a simulated reality, as it establishes that particles have no inherent values but rather a probability distribution of possible values. An obvious example is quantum tunneling, where a particle is not supposed to be able to cross a barrier physically; yet so long as an extension of the wave function reaches past the barrier, the random assignment of values at the collapse of the wave function leads, through the stochastically assigned probabilities in the simulation, to a certain portion of the particles ending up at a location past a barrier they should never have been able to cross. This comes about only through extrapolations of the continuum path that happen to lie on the other side of the barrier and are therefore allowable at the next frame of the simulation.
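The tunneling argument above can be illustrated with a small Monte Carlo sketch. The distribution below (98% of the weight before the barrier, a thin exponential tail past it) is a toy we made up for illustration, not a solution of the Schrodinger equation; it only shows how random assignment of positions at collapse lets a fixed fraction of particles end up past a barrier simply because the tail of the wave function extends there.

```python
import random

# Toy Monte Carlo for the collapse picture above. The 98% / 2% split and the
# exponential tail are invented for illustration; nothing here solves the
# Schrodinger equation.

BARRIER = 1.0   # hypothetical barrier position
KAPPA = 5.0     # hypothetical decay rate of the tail past the barrier


def collapsed_position() -> float:
    """Draw one stochastically assigned position at wave-function collapse."""
    if random.random() < 0.98:
        return random.uniform(0.0, BARRIER)        # the classically allowed region
    return BARRIER + random.expovariate(KAPPA)     # the tail that leaks past the barrier


if __name__ == "__main__":
    trials = 100_000
    tunneled = sum(1 for _ in range(trials) if collapsed_position() > BARRIER)
    # About 2% of particles are assigned a location past a barrier they "should"
    # never cross, purely because the distribution extends there.
    print(f"fraction past the barrier: {tunneled / trials:.4f}")
```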
Finally we run into quantum entanglement, which blatantly defies space: it not only exceeds the speed of light, it acts simultaneously, in real time. That is not possible by any physical explanation, and can only be described as a shunt/link in the simulation's code that projects simultaneously onto both particles, no matter where they are in the universe.
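As an illustration of this "shunt in the code" reading of entanglement, the toy Python sketch below has two particles reference one shared state record; measuring either one collapses that single record, so the outcomes agree instantly no matter how far apart the stored coordinates are. The class names and the two-outcome model are purely our illustrative assumptions.

```python
import random

# Toy sketch of entanglement as a shared record ("shunt") in the code. The class
# names and the two-outcome spin model are illustrative assumptions only.


class SharedState:
    """One underlying record that both entangled particles point to."""

    def __init__(self):
        self.value = None                     # undetermined until a collapse is forced

    def collapse(self) -> str:
        if self.value is None:
            self.value = random.choice(("up", "down"))
        return self.value


class Particle:
    def __init__(self, position, shared: SharedState):
        self.position = position              # coordinates never enter the update below
        self.shared = shared

    def measure(self) -> str:
        return self.shared.collapse()


if __name__ == "__main__":
    link = SharedState()
    a = Particle((0.0, 0.0, 0.0), link)
    b = Particle((1.0e26, 0.0, 0.0), link)    # nominally on the other side of the universe
    print(a.measure(), b.measure())           # always identical; no signal crosses the gap
```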
Theory Explaining the Aforementioned Characteristics of the Physical Universe: The distinct characteristics of the physical universe are best explained by a holographic projection created by a quantum simulation running on an enormously powerful computational platform. If the fabric of space can expand or shrink, that can only mean it is illusory, rescaled under certain conditions by the simulation software. Quantum entanglement is a shunt through the simulation code. The non-continuous, quantum nature of physics is the removal of unnecessary parameters and values that do not affect the goals of the simulation, which in turn reduces the computational burden. The wave function is likewise a mechanism to relieve computational burden, by keeping a range of values for a range of particles and randomly assigning them at the collapse of the wave function.
The Most Crucial Aspects of the Simulation: There are several key aspects of the simulation that give rise to the physical universe:
1. Refresh rate: also known as frames per second (Fps), this is what gives rise to our experience of time, as time would be the inverse of the localized refresh rate for the observer. In other words, there is an FpsMax, the maximum refresh-rate frequency, and the localized refresh rate is defined as Fps = FpsMax / (number of local entanglements). This means that time segments (δt = 1/Fps) grow as the number of entanglements grows, leading to the time dilation postulated by Einstein's General Theory of Relativity (see the numeric sketch after this list).
2. Wave function collapse points in time: the wave function collapses at each refreshed frame so long as there is a distinct effect from the local collapse on the trajectory of the simulation. There is no need to collapse the wave function if the overall trajectory of the simulation is unaffected. The wave function is deployed in the simulation to save computational power, and unnecessary collapses would seriously diminish its utility toward that purpose.
3. Mechanisms employed to keep the observers in the dark or convince them that the simulation is real: A) an illusory pretense of local reality; B) an illusory pretense of universal reality; C) hiding the computational limits of the simulation; D) heavy extrapolations utilized to make the simulation feasible by reducing the required computational power.
A) This is achieved through very small incremental changes, along the projected path continuum at each refreshed frame, that give the observer a sense of continuity.
B) This is achieved through the speed of light staying at a universal constant.
C) This is achieved through recasting of the spatial dimensions, as formulated by General Relativity, to compensate for the reduced number of frames refreshing the observer that leads to time dilation. Time dilates in exact inverse proportion to the recast that shrinks space, leading to no observable change from the observer's frame of reference.
D) As the number of entanglements increases, the simulation uses more and more projections/recasts of the spatial dimensions along the projected continuum, because the increased computational burden leads to time dilation, which is basically the lagging of the computer projection in certain locales. A great example of this lag's existence is the red-shifting of light, which comes about by keeping the speed of light constant and extrapolating the next point along the wave. This can be posed and explained in two distinct ways. In the first approach, since space is contracted where time is dilated, keeping the speed of light constant (from any frame of reference, while there is time dilation due to the locally reduced Fps) forces the light to travel a longer distance in space, which increases its wavelength and decreases its frequency, hence a red-shift. The second way of explaining it is to consider that as the refresh rate drops, the sampling of the wave falls below the Nyquist rate, and the observer ends up experiencing the lowest-frequency light that could constitute the same discrete points of propagation generated under the new, slower refreshing of the frames by the simulation.
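The sketch below puts numbers to the bookkeeping in item 1 and item D. FPS_MAX and the entanglement counts are made-up placeholders (only the ratios matter); the formulas are simply the ones stated above: the local Fps is FpsMax divided by the number of local entanglements, the local time step is its inverse, and the wavelength stretches by the same factor by which the local refresh rate drops.

```python
# Numeric sketch of items 1 and D above. FPS_MAX and the entanglement counts are
# arbitrary placeholders chosen for illustration; only the ratios matter.

FPS_MAX = 1.0e44   # hypothetical maximum refresh rate


def local_fps(entanglements: int) -> float:
    """Fps = FpsMax / number of local entanglements, as stated in item 1."""
    return FPS_MAX / entanglements


def local_time_step(entanglements: int) -> float:
    """The local time segment (delta-t) is the inverse of the local refresh rate."""
    return 1.0 / local_fps(entanglements)


def redshifted_wavelength(wavelength: float, n_source: int, n_observer: int) -> float:
    """Light from a heavily entangled (slow-refreshing) region appears stretched to a
    faster-refreshing observer by the ratio of the two local time steps."""
    return wavelength * local_time_step(n_source) / local_time_step(n_observer)


if __name__ == "__main__":
    dilation = local_time_step(1_000) / local_time_step(10)
    print(dilation)                                  # 100.0 -> local frames run 100x slower
    print(redshifted_wavelength(500e-9, 1_000, 10))  # 500 nm green light arrives at 50 um
```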
Corrections to Current Theories by Other Scientists: Currently, scientists are looking to observe anisotropies as proof that our physical universe is a simulation. In our opinion, the biggest anisotropy of all has already been observed and proves the theory: the anisotropy of time. According to physics there is no necessary justification for time running backward or forward for any particular reason, which gives rise to the theoretical possibility of time travel and to the misconception termed the space-time fabric/continuum. Time is the inherent experience of the frames of the simulation's projection. It is not a continuum with space; as explained above, dilation of time (a localized lag of the simulation due to increased entanglements) is simply compensated by recasting space in an exactly inverse fashion, to prevent the observer from witnessing blatant inconsistencies within the simulation and realizing that he/she is in one. So the anisotropy of time is proof that we are in a simulation.

Secondly, although Einstein's General Theory of Relativity correctly formulated the time dilations and space contractions, it fails to recognize that they are made to counteract each other through the simulation's formulations and code in order to keep the observer in the dark, and it draws the erroneous conclusion that they are inherently linked and that time is allowed to run backwards in a fashion similar to movements in space; that would run contrary to the whole logic of running a simulation. Though a simulation can run in parallel, be repeated (since the stochastic assignment of values gives rise to a set/series of outcomes that differ from one run to another), or even be reviewed or replayed for examination, we foresee no logical point in allowing the simulation to run backwards, either purposefully as interference or by design through the code, as this would create a nulling effect with no useful results at all, or end up with results that could easily have been achieved by other means/provisions for the simulation or in the simulation code. Furthermore, if observers were allowed to witness "reality" as a continuum, they could witness the backward run of time as the ultimate anomaly of the "reality" they experience and would immediately know that they are in a simulation.