The race for the fastest processor on the market
has long focused on increasing core counts rather than raising clock
speeds. On that note, today, Sunnyvale,
California-based Advanced Micro Devices launched its latest series of
Opteron processors, codenamed Magny Cours. The introduction of the
much-anticipated platform puts AMD in the position to provide its
customers with a choice of 8-core and 12-core CPUs, designed to
increase performance in both memory- and compute-intensive workloads.
"As AMD has done before, we are again redefining the server market
based on current customer requirements,” said Patrick Patla, vice
president and general manager, Server and Embedded Divisions, AMD. "The
AMD Opteron 6000 Series platform signals a new era of server value,
significantly disrupts today’s server economics and provides the
performance-per-watt, value and consistency customers demand for their
real-world data center workloads.”
In addition to the release
of the new processors, AMD also announced the introduction of its 5600
Series chipset, featuring I/O virtualization. The new platform offers a
number of benefits, including support for more memory thanks to 12 memory
DIMMs per processor, as well as an enhanced memory controller with four
channels of DDR3 memory. The chipset will also provide quad 16-bit
HyperTransport 3 technology links with up to 6.4 GT/s per link, along with
PCIe 2.0 support.
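As a rough sketch of what those quoted link speeds imply (assuming 16-bit-wide links at 6.4 GT/s per direction, as stated above; real-world sustained throughput will be lower):

```python
# Back-of-the-envelope HyperTransport 3 bandwidth from the figures
# quoted above: 16-bit links running at 6.4 GT/s.
GT_PER_S = 6.4e9        # transfers per second per link
LINK_WIDTH_BITS = 16    # link width per direction

bytes_per_s = GT_PER_S * LINK_WIDTH_BITS / 8
print(f"Per link, per direction: {bytes_per_s / 1e9:.1f} GB/s")      # 12.8 GB/s
print(f"All four links combined: {4 * bytes_per_s / 1e9:.1f} GB/s")  # 51.2 GB/s
```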
The new processors are designed for
2- and 4-socket enterprise servers, providing a choice of the
industry's first 12-core and 8-core processors, featuring 512KB of
L2 cache per core and 12MB of shared L3 cache. There are also AMD's CoolCore,
PowerNow!, CoolSpeed and APML technologies, the Enhanced C1 State and more.
With the release, AMD revealed that the platform would be compatible
with the company's upcoming server processors, based on the anticipated
"Bulldozer” core. Thanks to its overall specifications and pricing, the
new platform is expected to be adopted by Acer, Cray, Dell, SGI and HP.
Not so long ago, it was revealed that Intel planned
to introduce, sooner or later, a new dual-core processor with a clock
speed of 3.6GHz. This model, however, will likely be meant for the high
end, which means that the Santa Clara chip maker will have to develop a
different model for the mainstream. Granted, the existing mid-range
offering, known as the Core i5 650, already has a more-than-decent
clock speed of 3.2GHz, as well as 4MB of cache memory, but it is
limited in terms of overclocking. Fortunately for budget-conscious
overclockers, however, the company seems to already be working on a new
part.
According to Fudzilla, which
is also the source of the rumors regarding most of Intel's currently
known future chip plans, the CPU maker is devising the Core i5 655K.
This chip, while having the same 3.2GHz frequency as the i5 650, will be
"completely unlocked” and, thus, will be able to push higher in
overclocking scenarios.
The chip will combine a 32nm central
processing unit and a 45nm graphics core in the same package. It will be
compatible with socket LGA 1156 motherboards and will supposedly come at
a premium price compared to the Core i5 650.
The central
processing unit is said to have a thermal design power (TDP) of 73W,
though it will likely consume more depending on how far end-users are
willing to push it. Considering these capabilities, it makes sense to
think that the final price tag will be higher than its predecessor's.
Currently, the i5 650 carries a tag of $194.90 in the United States and
is priced at 150 Euro in Europe. Obviously, the newcomer will ask for a
little more. Unfortunately, the exact price is not specified.
If the rumors prove valid, Intel will officially introduce its new
dual-core mainstream processor by the beginning of June 2010.
Electronic Arts suggests that, rather than releasing
a PS4 and an Xbox 720, Sony and Microsoft will actually release in-between
models such as a PlayStation 3.5 and an Xbox 560. The reasoning is that
Sony and Microsoft didn't get the ball rolling as quickly as in previous
generations, and that 2011 is simply too soon for next-gen consoles to be
launched.
For that reason, Sony and Microsoft are expected to launch
these mid-cycle hardware updates to extract more out of the current platforms.
What are your thoughts on this? Would you like to see another
PS3/Xbox 360 created as a stepping stone to the real next-gen
consoles? I do think there is a lot that can still be done with the current
generation of consoles, but 2-3 years is quite a while away, and a
lot of technological advancement will happen in that time. It
will be interesting to see which path the two console makers decide to travel.
A few minutes ago, the Ubuntu development team
unleashed the first Beta version of the upcoming Ubuntu 10.04 LTS (Lucid
Lynx) operating system, due for release in late April this year. As
usual, we've downloaded a copy of it in order to keep you up-to-date
with the latest changes in the Ubuntu 10.04 LTS development.
What's new in Ubuntu 10.04 LTS
Beta 1? Well, as everybody already knows... Ubuntu 10.04 LTS Beta 1 has a brand-new look,
composed of two new themes (Ambiance and Radiance), one dark and the
other light. Click the link above to access a very nice article
we created last week, to showcase the new themes, logos, font,
boot splash and boot prompt. However, after installing the
proprietary Nvidia video drivers, the boot splash screen has been
changed to what you see below...
The hype surrounding NVIDIA's upcoming GTX 470 and
480 graphics cards has reached such high levels that even Intel's
recently launched six-core Gulftown got less attention than the various,
and mostly information-deprived, leaks related to the adapters. So far,
mystery has continued to shroud the actual specs of the cards, but it
seems that the long-awaited moment when they are revealed has finally
come.
What the two boards have in common is support for three-way SLI,
CUDA, 3D Vision Surround technology and PhysX, as well as
identical video output options, namely dual DVI and HDMI. This feature
set adds to the already obvious support for all DirectX 11 graphics
features.
The online entity that managed to get hold of the GTX
470 and GTX 480 product specifications is Expreview
and, judging by them, the architecture used in the cards'
construction differs from anything NVIDIA has built so far. The
main curiosity lies in the unusual performance numbers, which, while
somewhat lower than those of the strongest competing products from AMD, hint at a
possibly different approach to graphics processing.
The GeForce GTX 470 has a GPU clock of 607MHz, 448
CUDA cores, a shader frequency of 1215MHz and 1280MB of GDDR5 memory.
This VRAM operates on a 384-bit memory interface and has a frequency of
1674MHz, or 3348MHz DDR. The card also has a thermal design power (TDP)
of 225W and draws the necessary energy not just from the PCI Express
slot, but also from the power supply, through two PCI Express power
connectors, one six-pin and one eight-pin.
The
more powerful GeForce GTX 480 has 480 CUDA cores and its graphics
processing unit runs at 700MHz. Also, the amount of memory is confirmed
at 1536MB, running at 1,848 (3,696) MHz over the same 384-bit interface
as on the GTX 470. Furthermore, the shader frequency is set at 1,401MHz and
the TDP is 295W. Power is drawn through the same six-pin plus eight-pin
power connector combination.
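For context, the leaked figures above are enough for a rough peak-memory-bandwidth estimate (effective GDDR5 data rate times bus width; these are the article's numbers, not confirmed specifications):

```python
# Rough peak memory bandwidth from the leaked specs quoted above:
# effective data rate (MT/s) multiplied by bus width in bytes.
def mem_bandwidth_gb_s(effective_mhz, bus_bits):
    """Peak bandwidth in GB/s for a given effective rate and bus width."""
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

print(f"GTX 470: {mem_bandwidth_gb_s(3348, 384):.1f} GB/s")  # ~160.7 GB/s
print(f"GTX 480: {mem_bandwidth_gb_s(3696, 384):.1f} GB/s")  # ~177.4 GB/s
```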
It is still unclear why and how
NVIDIA gave its graphics adapters such unusual clock speeds, but all
will be made clear once the company makes the formal announcement on
March 26. Until then, end-users will be able to happily contemplate the
rather decent price tags of $349 for the GTX 470 and $499 for the GTX
480.
Intel has already launched its Gulftown six-core
central processing unit, which is essentially the first such CPU to make
an appearance on the consumer market. Of course, six-core chips aren't
exactly a new concept, but they were not brought to consumers sooner
because they were not needed there. Now, with applications starting to take more and more
advantage of multiple cores, AMD is also gearing up to release its own
hexacore chip, codenamed Thuban. This chip’s specs, however, seem to
have remained a mystery. Until recently, that is.
Hexus reports that
a leaked Gigabyte CPU support list has some info on the upcoming Thuban
CPU, which will supposedly debut in at least two versions, namely the
1055T and the 1035T.
These chips will likely be the slower of the
set, the former being clocked at 2.8GHz and the latter at 2.6GHz.
Obviously, these speeds are quite inferior to those of the Intel Core
i7-980X, which runs at 3.33GHz, but Advanced Micro Devices is expected
to also release speedier units, like the 1075T, which should achieve at
least 3GHz.
Unfortunately, the leak did not have any other
information on the Phenom II X6 CPUs, which means that the cache memory
and thermal design power, among other things, will have to remain a
mystery for now. There is also no word on the exact date when the chips
will make their debut, nor is the actual price point known. Of course,
considering the clocks, they will likely be more affordable than the
Gulftown, which bleeds about $999 out of any wallet it comes across.
The
Thuban will somehow have to make up for the gap between its own
launch and Gulftown's. Fortunately for AMD, there aren't many
applications that can take advantage of all six cores, nor are there
many enthusiasts willing to buy Intel's processor. This will give an
edge to the Sunnyvale-based company, especially if its processors turn
out to be more affordable.
Fallout New Vegas uses the same engine as Fallout 3 and retains roughly 90% of the original F3 features, but this time the map is located in Vegas and there are a lot of new things!
Following a series of rumors
that have been making the rounds on the Internet for a while now, it
appears that Intel's much-anticipated 6-core Gulftown processor is finally
ready for launch. Designed for the enthusiast market, the new Core i7 CPU
will be part of the company's 32nm-based line of processors. On that note,
reviews of the new Core i7 980X have just gone online at various hardware
sites. This is a clear indication that Intel will soon announce these
new processors and make them available to enthusiast consumers.
Reviews of the new processor have surfaced
on several websites,
consequently providing details about the specifications and features
enabled in Intel's new chip. According to these details,
the new processor comes with a factory-set core frequency of 3.33GHz,
the same as the previous, 4-core Core i7 975 model. The CPU features
Intel's SMT (simultaneous multi-threading) technology, which means
that its 6 cores will be seen as 12 threads by the operating system.
In addition, the
processor also features 12MB of L3 cache, up from the 8MB of L3
cache available on the older, Bloomfield-based Core i7 975 model. There's an
integrated memory controller, the addition of 7 new AES instructions
(AES-NI) and dual QPI, with up to 25.6GB/s of bandwidth.
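As a quick sanity check on that bandwidth figure: QPI moves 2 bytes per transfer in each direction, so at 6.4 GT/s a link works out to 25.6GB/s counting both directions (a sketch assuming the standard QPI link width):

```python
# Sketch of where the 25.6GB/s QPI figure comes from: 2 bytes per
# transfer per direction at 6.4 GT/s (assumed standard QPI link width).
QPI_GT_S = 6.4e9          # transfers per second
BYTES_PER_TRANSFER = 2    # 16-bit payload per direction

per_direction = QPI_GT_S * BYTES_PER_TRANSFER
print(f"Per direction: {per_direction / 1e9:.1f} GB/s")        # 12.8 GB/s
print(f"Both directions: {2 * per_direction / 1e9:.1f} GB/s")  # 25.6 GB/s
```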
The new processor is
also part of the outfit's Tick-Tock strategy, with the “Ticks”
representing the adoption of a new process technology and the “Tocks”
focused on providing a new microarchitecture. This one fits in
the “Tick” category, becoming Intel's first 32nm desktop CPU with 6 cores
and 12 threads. In addition, the new model will be compatible with
Intel's current high-end platform, X58, which was introduced back when
the first Bloomfield processors were announced. Motherboards based on
the X58 chipset should work just fine with
Gulftown, provided a BIOS update is applied.
Intel is yet to
formally announce the product, but we are expecting it to become
official within the next couple of hours, with a high chance for
enthusiasts to quickly get one for their own systems. Price-wise, we are
looking at a CPU that should be available for US$999, the same as the
Bloomfield Core i7 975. We should note that the Core i7 975 and the 980X
will also share the same power consumption specifications.
Even though humanity might dream of an idyllic
future where there is no strife, the present is very much full of
situations where various personas trade barbs in the hopes of gaining
the upper hand in what they perceive as a heated competition. While the
longstanding feud between Intel and Advanced Micro Devices can be said
to hold the top position among corporate rivalries, the second place
almost as easily falls to the relationship between the latter and
NVIDIA. Not long after AMD accused
NVIDIA of deliberately reducing the functionality of its PhysX
technology, the Sunnyvale chip maker has followed up with the claim that
the Santa Clara GPU maker has a habit of bribing game developers.
According to what Richard Huddy, AMD's senior manager of developer
relations in Europe, said in an interview
with Thinq.co.uk, game developers implement PhysX in their games not
because they want to, but because it comes as part of the marketing deal
they have with NVIDIA. According to Huddy, game developers, with the
exception of Epic, don't actually want PhysX, but they end up using it
anyway because NVIDIA pays them to.
"What I have seen with physics, or PhysX rather, is that Nvidia create a
marketing deal with a title, and then as part of that marketing deal,
they have the right to go in and implement PhysX in the game. The
problem with that is obviously that the game developer doesn’t actually
want it. They are not doing it because they want it; they’re doing it
because they are paid to do it,” stated Huddy.
"I am not aware of any GPU-accelerated PhysX code which is there because
the games developer wanted it with the exception of the Unreal stuff. I
don’t know of any games company that’s actually said ‘you know what, I
really want GPU-accelerated PhysX, I’d like to tie myself to Nvidia and
that sounds like a great plan’,” he added.
AMD's representative also said that NVIDIA's PhysX will be short-lived,
because it is not an open standard and, as such, will lose ground to an
emerging rival technology.
"I think the proprietary stuff will eventually go away. If you go back
ten years or so to when Glide was there as a proprietary 3D graphics
API, it could have coexisted, but instead of putting their effort into
getting D3D to go well, 3dfx focused on Glide. As a result, they found
themselves competing with a proprietary standard against an open
standard, and they lost. It’s the way it is with many of the standards
we work with,” said Mr. Huddy.
All that is needed now for the completion of this new episode in the AMD
vs. NVIDIA saga is the latter's response to the accusation, which will
likely come soon, as was the case with the Santa Clara GPU maker's last
reply.