
Nvidia GeForce RTX 3050, RTX 3050 Ti Laptop GPUs Announced

Nvidia today announced the GeForce RTX 3050 and RTX 3050 Ti laptop GPUs, bringing its most powerful GPU technologies to more mainstream gaming laptops. The new GPUs will power gaming laptops starting at just $799, bringing Nvidia’s Ampere architecture and advanced RTX technologies to budget gaming machines.

Mark Avermann, Nvidia’s director of product management for laptops, was quoted as saying “The latest wave of laptops provides the perfect opportunity to upgrade, particularly for gamers and creators with older laptops who want to experience the magic of RTX.” Further, he added that “there are now five times more RTX 30 Series gaming laptops that are thinner than 18mm compared with previous-generation RTX systems, delivering ground breaking performance with very sleek and portable designs.”

The new GPUs bring dedicated ray tracing cores (RT cores) and Tensor cores to mainstream audiences and expand the number of RTX 30 series laptop models to over 140. Laptops with the RTX 3050 and RTX 3050 Ti GPUs will offer twice the performance of the last generation, along with support for 60fps gameplay at 1080p.

Powering this performance are Nvidia’s RTX technologies, such as DLSS 2.0 and Nvidia Reflex. DLSS 2.0 delivers higher frame rates in games by utilising the RTX 30 series GPUs’ Tensor cores. DLSS support is now available in over 40 AAA and indie titles, including Call of Duty: Warzone and Modern Warfare. DLSS also benefits creators, offering performance improvements in software like D5 Render and Nvidia Omniverse. Moreover, Nvidia Reflex brings 144+ FPS and sub-25ms system latency in supported titles, including Valorant and Overwatch.

Laptops with the new RTX 3050 and RTX 3050 Ti will be available from manufacturers including Asus, Acer, Alienware, and MSI this summer.


DDR1, DDR2, DDR3, And DDR4 RAM Memory: What Are Their Differences?

Since the introduction of RAM in the DIMM (Dual In-line Memory Module) format, many types of memory have come and gone on the market, but since 2000, it is DDR RAM that has prevailed over the rest.

Here we will explain the differences between DDR1, DDR2, DDR3, and DDR4, from their inception in 2000 to today.

It is true that DDR1 and DDR2 RAM are no longer in common use; in fact, DDR1 memory is long gone. DDR3 RAM is out of production, but many systems still use it, while DDR4 has been established in the market since its launch in 2014 and is currently used by all platforms.

But let’s see what differences there are between DDR1, DDR2, DDR3, and DDR4 RAM so that you can learn to tell these types of modules apart.


Technical Differences

DDR stands for Double Data Rate, and it basically means that these memories perform two data transfers per clock cycle, one on the rising edge of the clock signal and one on the falling edge. This is what all generations have in common, but logically, each new generation has introduced changes and improvements that make them technically very different.
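As a quick illustration of the arithmetic (a minimal sketch; the function name is ours, not any standard API):

```python
# Double Data Rate: data moves on both the rising and falling edge of
# the clock, so the effective transfer rate is twice the clock frequency.
def effective_rate_mts(clock_mhz: int) -> int:
    """Effective transfer rate in MT/s for a DDR memory at clock_mhz."""
    return 2 * clock_mhz

# Example: a 200 MHz DDR1 clock yields 400 MT/s (marketed as DDR-400).
print(effective_rate_mts(200))  # prints 400
```

This is why marketing names like DDR3-1600 quote megatransfers per second, double the actual 800 MHz bus clock.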


DDR1

Launched in 2000, it did not come into widespread use until almost 2002. It operated at 2.5V or 2.6V, and its maximum density was 128 Mb (so there were no modules larger than 1 GB), with speeds of 200 to 400 MT/s (100-200 MHz clock).


DDR2

Released around 2004, it ran at 1.8 volts, 28% less than DDR1. Its maximum density doubled to 256 Mb (up to 2 GB per module). Logically, the maximum speed also multiplied, reaching 1066 MT/s (533 MHz clock).


DDR3

This release came in 2007, and it was a revolution because XMP profiles were introduced with this generation. The memory modules operated at 1.5V (1.65V for some performance kits), with base speeds of 1066 MT/s (533 MHz clock), though speeds went much higher, and density reached up to 8 GB per module.


DDR4

This generation did not arrive until 2014, but today it is the most widespread. The voltage was reduced to 1.05-1.2V, although many modules operate at 1.35V. Speed has increased notably, with ever-faster modules leaving the factory, but the base rate began at 2133 MT/s. There are already 32 GB modules, and capacities continue to grow little by little.
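Putting the generations side by side: the peak theoretical bandwidth of a single 64-bit-wide module is simply the transfer rate times 8 bytes per transfer. A small sketch using the top standard JEDEC speed grade of each generation (representative figures chosen by us, not the base speeds quoted above):

```python
# Peak bandwidth of one 64-bit-wide module:
#   MB/s = transfer rate (MT/s) x 8 bytes per transfer
TOP_JEDEC_GRADES_MTS = {
    "DDR1": 400,   # DDR-400   -> PC-3200
    "DDR2": 800,   # DDR2-800  -> PC2-6400
    "DDR3": 1600,  # DDR3-1600 -> PC3-12800
    "DDR4": 3200,  # DDR4-3200 -> PC4-25600
}

for gen, mts in TOP_JEDEC_GRADES_MTS.items():
    print(f"{gen}: {mts} MT/s -> {mts * 8} MB/s peak")
```

The PC-xxxx module names encode exactly this number, and the output shows the roughly generation-over-generation doubling discussed below.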

Physical Differences

Although these four types of memory use the DIMM format and can look very similar in appearance (in fact, they are all 133.35mm long), there are fundamental physical differences, which is why we can never plug, say, a DDR1 RAM module into a DDR2 socket.

All modules have a notch in the contact area that prevents them from being inserted into the sockets of another generation (and be careful, because if you push too hard, you could break the socket or the RAM module).


In addition, DDR4 RAM modules have a contact edge with a ridge in the center; it is not completely flat. Strictly speaking, this is unnecessary, because the notch alone would prevent us from connecting a DDR4 module to a socket of another generation. In the image below, you can see the physical differences between the modules.

[Image: DDR1 vs DDR2 vs DDR3 vs DDR4 RAM modules compared]

Finally, it should be noted that in each generation, the number of contact pins has changed as follows:

  • DDR1: 184-pin DIMM, 200-pin SO-DIMM, and 172-pin micro-DIMM.
  • DDR2: 240-pin DIMM, 200-pin SO-DIMM, and 214-pin micro-DIMM.
  • DDR3: 240-pin DIMM, 204-pin SO-DIMM, and 214-pin micro-DIMM.
  • DDR4: 288-pin DIMM and 260-pin SO-DIMM; the micro-DIMM format was dropped.

RAM Differences In Performance

The most obvious differences between the generations of RAM are in performance. As the technology has advanced, performance has steadily improved, roughly doubling generation after generation.

Thus, there is an obvious difference between DDR3 and DDR4 RAM, for example, not only in benchmarks but also in the responsiveness users notice when using a PC with one memory or the other.

That said, part of this difference also comes from improvements in the rest of the components, since the change from one RAM generation to another is usually tied to a complete platform change.


An Algorithm That Detects Deepfakes By Looking At Their Eyes

Researchers have developed an algorithm that detects deepfake portraits by looking at the eyes and the reflection of light in them. Computer scientists at the University at Buffalo developed the tool, and the full study has been published online.

The key is in how light is reflected in the eyes. The algorithm, which is 94% effective at detecting deepfakes, analyzes the light in the eyes of a video’s subjects to determine whether they are real or not.

More specifically, the system analyzes the corneas of the eyes, whose surface reflects light in much the same way a mirror would. The idea is to determine whether the patterns of reflected light match across both eyes, as they would in a real-world scene.

Logically, if we take as an example a photo of a person taken with a camera, the reflections in both eyes will be very similar, since both corneas are looking at the same scene from almost the same point. Deepfakes tend to miss small details like these, as generative models do not reproduce such reflections consistently.

Ironically, it is again an artificial intelligence system that is in charge of finding the error this time, analyzing the face and the light reflected in each of the eyeballs in search of these inconsistencies. Once it has done so, it generates a similarity score: the lower the score, the more likely the face in the image is a deepfake.
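The article does not spell out the implementation, but the core idea can be sketched as an overlap comparison between the highlight regions of the two corneas. A minimal sketch under our own assumptions (the function, data representation, and scores here are illustrative, not the paper’s code):

```python
def reflection_similarity(left: set, right: set) -> float:
    """Intersection-over-union of two sets of highlight pixel coordinates.

    In a genuine photo, both corneas reflect the same scene, so the
    highlight regions should largely overlap; a low score hints at a
    possible deepfake.
    """
    union = left | right
    if not union:
        return 1.0  # no highlights in either eye: nothing to compare
    return len(left & right) / len(union)

# Toy example: identical highlight regions score 1.0, disjoint ones 0.0.
left_eye = {(3, 3), (3, 4), (4, 3), (4, 4)}
right_eye_real = set(left_eye)      # matching reflections -> consistent
right_eye_fake = {(0, 0), (0, 1)}   # mismatched reflections
print(reflection_similarity(left_eye, right_eye_real))  # prints 1.0
print(reflection_similarity(left_eye, right_eye_fake))  # prints 0.0
```

A real pipeline would first have to detect the face, segment each cornea, and extract the specular highlights before any such comparison could be made.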

Beyond its functionality only having been demonstrated on portraits, this AI is only capable of detecting the aforementioned inconsistencies in the subject’s eyes. If the eyes are not visible, the system will not work.

In fact, if the subject is not looking at the camera, the system is likely to deliver a false positive. All of this is being investigated for future versions, but the caveat is that, for now, it will not be able to detect the most advanced deepfakes.


Is Your Brain a Computer? The Two Sides of the Answer

Let me start with a disclaimer: we don’t know how the brain works exactly and we probably won’t know in the foreseeable future. We don’t know how the brain goes from neuronal activity to the diversity of human behavior. The methods that we have to study the brain provide tons of data but a unified theory of the brain doesn’t seem to be near.

But we do know some things about the brain.

We know how neurons and synapses work. We know about the brain’s functional and structural hierarchical organization. We know that it does parallel processing. We know that it uses very little energy to do all it does.

And we know it resembles a computer in many ways. But, is it a computer?

We’ll understand the brain by looking at computers

The metaphor of the brain as a computer has been discussed by neuroscientists and computer scientists for decades. And the controversy still holds today.

Some argue that the brain is, in fact, a computer. They usually subscribe to this idea not because of its inherent truth, but because of its usefulness. A computer takes inputs, processes them, and produces outputs. So does the brain. Thus, there may be ideas we could take from computer science and use to build knowledge about the biological machine that the brain is.

Gary Marcus, professor of psychology and neural science at New York University, argues that we could use notions from computer science to advance neuroscience. In a paper published in Science in 2014, he and his colleagues argued that a specific type of computer, a field-programmable gate array (FPGA), could function similarly to the brain.

In particular, this computer works as a set of reprogrammable building blocks that can take on different tasks. For example, as Marcus explains, one block could be in charge of vision, another could do arithmetic, and another could process signals. They suggest that the brain may be similarly structured from elementary building blocks, and that these brain primitives are what we should look for to understand the brain.

The opposite approach would be to try to map every brain process, from neural activity to human behavior, but Marcus criticizes this idea. As he puts it:

“It is unlikely that we will ever be able to directly connect the language of neurons and synapses to the diversity of human behavior, as many neuroscientists seem to hope.”

He argues that finding a robust middle ground is necessary to go from understanding the most basic level of brain processing to understanding the most complex and that those mental building blocks may hold the key for it.

We’ll understand the brain by looking at the brain

But some reject the idea that the brain is a computer, not just because they think it isn’t true, but because they believe it has no use in advancing our understanding of the brain.

There are some strong arguments against the computational notion of the brain, in particular against the most basic aspect of the metaphor: that the brain has a neural code. That is, it internally represents the external stimuli.

Instead, what we know is that there is a relationship between stimuli and neural activity. But we don’t know whether this activity represents the stimuli or not.

György Buzsáki, professor of neuroscience at New York University, says that while a computer passively takes in information and represents it as code, the brain is part of a human being that actively interacts with the world. The brain takes in information and then searches for possibilities to make sense of it. As Matthew Cobb explains in reference to Buzsáki’s argument:

“His conclusion — following scientists going back to the 19th century — is that the brain does not represent information: it constructs it.”

The detractors of the computational analogy say that computers are not a good source to look for knowledge of the brain, but no one is sure what the best approach to study the brain is. Some prefer to focus on developing better mathematical models. Others say that it’s better to study the brains of smaller organisms, such as the worm C. elegans. Others think that studying simple processes is the way to eventually understand the emergence of complex phenomena, such as consciousness.

But, what they all agree on is that it’s by studying the brain that we will understand the brain.

Concluding remarks

So what can we get from this debate?

The most fruitful approach is to unpack the metaphor of the brain as a computer without taking any side a priori. It may have some value, so we should try to assess what that value is, along with its limits and scope.

Interestingly, there have been other metaphors for the brain before. One of the first ones is the hydraulic theory of René Descartes in the 17th century. He suggested that the brain produced the movements in the muscles by flowing animal spirits through tubes inside the body.

Why did he think this? Because he was interested in hydraulics. And there have been other brain metaphors in history, each corresponding to some new technology of the time.

These metaphors provided some utility. For example, they served as analogies on which science could base new experiments. But each eventually lost that utility when science advanced enough to surpass the limits of its metaphoric scope.

Once we develop stronger theories of the brain, make new discoveries, or invent better methodologies, the metaphor of the brain as a computer will be of little use. And then, a now-inconceivable technology may appear to replace computers as the best comparison for brains and serve as the next paradigmatic metaphor.

In conclusion, we could say that the brain both is and isn’t a computer. Because, as Matthew Cobb says, a metaphor is always only partial in nature.