Most of the cryptographic techniques used today are built on problems that are easy to compute in one direction but hard to reverse. Multiplying primes or raising a number to a power is straightforward; determining which primes went into a product, or which exponent (the discrete logarithm) yields a given result, is hard math. This is particularly true when the quantities involved are really big. After all, it's not particularly difficult to find the prime factors of 36.
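To make the asymmetry concrete, here is a minimal trial-division sketch in plain Java (nothing assumed beyond the standard library). It factors 36 instantly, but the same brute-force search is hopeless against the 2048-bit moduli used in real-world cryptography, whose smallest prime factors are hundreds of digits long.

```java
import java.util.ArrayList;
import java.util.List;

public class TrialDivision {
    // Naive trial division: instant for 36, astronomically infeasible
    // for numbers the size of an RSA modulus.
    static List<Long> factor(long n) {
        List<Long> primes = new ArrayList<>();
        for (long p = 2; p * p <= n; p++) {
            while (n % p == 0) {
                primes.add(p);
                n /= p;
            }
        }
        if (n > 1) primes.add(n); // whatever remains is prime
        return primes;
    }

    public static void main(String[] args) {
        System.out.println(factor(36)); // [2, 2, 3, 3]
    }
}
```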
We want the numbers used when creating a cryptographic algorithm to be large enough that they are too complex and challenging for an adversary to crack. By this reasoning, accepting values of zero should be completely forbidden. However, Java 15 and later versions gave us exactly that. Due to a missing range check (CVE-2022-21449, nicknamed the "psychic signatures" vulnerability), the Elliptic Curve Digital Signature Algorithm (ECDSA) implementation in Java 15 through 18 accepted a signature consisting entirely of zeros as valid for any message. This means authentication is totally broken: any entity can authenticate as whoever it chooses to impersonate. An extremely serious "oops," yet representative of the everyday blunders and errors that occur in cryptography and cryptographic implementations.
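To see how simple the flaw is to trigger, here is a minimal check adapted from the published proof of concept for CVE-2022-21449 (a sketch for testing your own JVM, not novel attack code). A 64-byte all-zero array encodes r = 0 and s = 0 in IEEE P1363 signature format, and a vulnerable runtime accepts it for any message under any key.

```java
import java.security.KeyPairGenerator;
import java.security.Signature;

public class PsychicSignatureCheck {
    public static void main(String[] args) throws Exception {
        // Any EC key pair will do; the forged signature never touches the private key.
        var keys = KeyPairGenerator.getInstance("EC").generateKeyPair();

        // r = 0 and s = 0 in raw IEEE P1363 form (32 bytes each for P-256).
        var blankSignature = new byte[64];

        var sig = Signature.getInstance("SHA256withECDSAinP1363Format");
        sig.initVerify(keys.getPublic());
        sig.update("Hello, World".getBytes());

        // Prints true on unpatched Java 15-18; false on fixed runtimes.
        System.out.println(sig.verify(blankSignature));
    }
}
```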
It's hardly the first significant bug we've encountered this year. We also discovered that as many as 100 million Samsung phones could have their TrustZone (the device's most secure component) compromised because a developer neglected to supply the random value the crypto operation required, providing a predictable, effectively zero value instead. The point is that mistakes in cryptography are frequently made by programmers.
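This class of mistake is easy to illustrate with the standard Java crypto APIs. The sketch below (illustrative only; it is not Samsung's actual TrustZone code) contrasts a hard-coded, all-zero AES-GCM nonce with a freshly generated random one. Encrypting twice under the same key with the same zero nonce destroys GCM's confidentiality and authenticity guarantees.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

public class NonceMistake {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        Cipher gcm = Cipher.getInstance("AES/GCM/NoPadding");

        // BROKEN: an all-zero nonce, identical on every call. Reusing a
        // key/nonce pair lets an attacker recover plaintext relationships
        // and forge authentication tags.
        byte[] zeroNonce = new byte[12];
        gcm.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, zeroNonce));
        byte[] bad = gcm.doFinal("secret".getBytes());

        // CORRECT: a fresh random nonce for every encryption under a key.
        byte[] nonce = new byte[12];
        new SecureRandom().nextBytes(nonce);
        gcm.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, nonce));
        byte[] good = gcm.doFinal("secret".getBytes());

        System.out.println(bad.length + " / " + good.length);
    }
}
```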
Furthermore, many of these problems are hard to find because the observable behavior appears flawless: the program operates exactly as expected. We simply cannot identify its flaws at that level of detail.
We frequently discuss algorithmic flaws and the potential for cutting-edge computing technology to defeat current cryptography, yet such attacks, while serious, are rarely what breaks systems in practice. The most common cause of cryptographic failure is programmer error, and frequently small errors are at fault. The computing community has long acknowledged that code has bugs and security measures are imperfect, which is why a defense-in-depth strategy is typically advocated, with access control and filtering applied at every layer. Combine this strategy with the doctrine of least-privileged access, and we should be fine.
However, if every layer of the defense-in-depth strategy depends on the same cryptography and authentication techniques, we have a monoculture, and a monoculture can be taken down by a single attack vector. Consider that the common cold can infect any human being. Defense in depth is essential, yet by itself it falls short: the security defenses at every layer must also be diversified.
A diversification strategy, in crypto or otherwise, is only effective if it is additive, layering several security measures on top of one another. In the cryptographic field, examples include using both symmetric and asymmetric cryptographic methods, moving key exchange out of band, and ideally employing a two-factor authentication scheme. Experienced security architects should always be adept at identifying single points of failure across all of their chosen protections. Then we can acknowledge with confidence that no computer system ever operates in a completely safe manner.
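As one concrete illustration of layering symmetric and asymmetric techniques, here is a minimal hybrid-encryption sketch using the standard Java crypto APIs (the choice of RSA-OAEP plus AES-GCM is mine for illustration, not prescribed above). A fresh AES key encrypts the bulk data, the recipient's RSA key wraps the AES key, and an attacker must defeat both layers to read the payload.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;

public class HybridEncryption {
    public static void main(String[] args) throws Exception {
        // Recipient's long-term asymmetric key pair (RSA-2048 assumed here).
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair recipient = kpg.generateKeyPair();

        // Symmetric layer: a fresh AES key encrypts the bulk data with GCM.
        SecretKey aesKey = KeyGenerator.getInstance("AES").generateKey();
        byte[] nonce = new byte[12];
        new SecureRandom().nextBytes(nonce);
        Cipher gcm = Cipher.getInstance("AES/GCM/NoPadding");
        gcm.init(Cipher.ENCRYPT_MODE, aesKey, new GCMParameterSpec(128, nonce));
        byte[] ciphertext = gcm.doFinal("payload".getBytes());

        // Asymmetric layer: RSA-OAEP wraps the AES key for the recipient.
        Cipher wrap = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        wrap.init(Cipher.WRAP_MODE, recipient.getPublic());
        byte[] wrappedKey = wrap.wrap(aesKey);

        System.out.println(wrappedKey.length + " / " + ciphertext.length);
    }
}
```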