What about a combination of Snapdragon + Intel?

This is going to sound weird, because the required battery capacity would have to be really generous. Still, I keep wondering about it. From what I have read, everyone is thinking about ARM: everyone wants to see if it can be combined with Windows without needing an emulator. Windows RT already showed that an SoC can run Windows up to a point, and in my opinion an SoC could handle full Windows.

But here is my idea. When we are “powered down”, we are usually just watching a movie or browsing, and those tasks aren’t exactly intensive (the A6 in an iPad handles them easily). Even when we are watching, say, a 4K movie, the A9X (roughly the equivalent of an older Intel chip) can handle it perfectly. It’s only when we are working that things get intensive. So what if we combine an SoC with an Intel chip? (Apple has already done something like this!)

The SoC would handle anything that is not task intensive, and Intel would handle all the intensive tasks. Throw in a good graphics card (say Nvidia or AMD) for graphics-intensive rendering.

This is how I imagine it.
Send all the light tasks, such as browsing files, background indexing, downloads, spyware scans (if you still run them), and web browsing, to the SoC. This should ease the burden on the actual Intel chip.
Send all tasks that require intensive computing, such as rendering pictures, rendering movies, intensive photo development, watching 4K movies, data-analysis jobs, etc., to the Intel chip.
Send all graphics tasks, such as VR, 4K movie rendering, intensive photo development, etc., to the Nvidia or AMD GPU.
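Purely as a sketch, the three-way split above could be written as a simple lookup. Everything here (the category sets, the `route_task` helper, the unit names) is hypothetical and just illustrates the routing idea; no real scheduler works this way:

```python
# Hypothetical sketch of the routing idea above: classify each task
# and dispatch it to the SoC, the Intel CPU, or the dedicated GPU.
# All names and categories are invented for illustration.

LIGHT_TASKS = {"file_browsing", "indexing", "download", "web_browsing"}
HEAVY_TASKS = {"photo_rendering", "video_rendering", "data_analysis"}
GRAPHICS_TASKS = {"vr", "4k_render", "photo_development"}

def route_task(task: str) -> str:
    """Return which unit a given task would be sent to."""
    if task in LIGHT_TASKS:
        return "soc"
    if task in GRAPHICS_TASKS:
        return "gpu"
    # Heavy tasks, and anything unrecognized, go to the big core.
    return "cpu"

print(route_task("web_browsing"))   # soc
print(route_task("vr"))             # gpu
print(route_task("data_analysis"))  # cpu
```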

I imagine the minimum memory requirement for the entire system would be somewhere between 16 and 32 GB. However, given that an SoC can survive on a simple 1–2 GB, 18 GB should be the sweet spot.

What do you think?


No, we can’t, and nobody has done that. ARM is a completely different architecture with a different instruction set; you can’t just switch between instruction sets, you need different program code. Not to mention switching on the fly…

Core M goes all the way down to 1 W consumption when not used intensively. That’s just as good as ARM.


The idea is very cool.

Unfortunately it’s more a subject of research for a large company than for Eve.

Making them switch “on the fly” is probably impossible. Choosing which one to boot would be easier, but its usefulness would be very limited.
Having a “fast switch”, with the display connected to a different board on the fly, might be feasible, but the two environments would still be almost completely isolated, and I’m not sure how power management would have to be handled.

I think the best chance is Windows 10 running x86 software on ARM processors. We’ll have to wait a year or so to see what they have in mind, but that could solve the biggest problems. Computing power I don’t consider a real issue: if you want raw horsepower, you’ve got to go for a desktop.

What Apple does is combine the Intel processor that runs your computer with a small low-power Qualcomm chip that drives a very specific set of features. In other words, the Qualcomm chip is used more or less like any other Texas Instruments or Fujitsu chip you can find on some PCIe add-on cards (Ethernet, modem, video acquisition… you name it).


…or you can go for an Intel laptop. There is no “good” and “bad”, it’s too subjective, but Intel processors will remain much more efficient here, as they don’t need emulation slowing them down.

Unfortunately, I think this would require a change to the Intel architecture. Apple did it with the 2 high-powered / 2 low-powered cores in the iPhone, but I don’t know if that could be transferred to a PC.

Perhaps you could have a BIOS setting to determine whether the PC should be throttled, thereby stepping it down to low power usage. This could be very dangerous, though, with many people forgetting they throttled down, or software unintentionally throttling down and slowing the whole thing. The throttle-up / throttle-down switch could be software controlled, but I believe it would have to be designed very well, with a full-power mode (the normal setting of the V), an extreme-power mode (a limited-duration extreme boost, within reason), and a down-throttle mode (a reduced-cycle mode, not a reduced-power mode).

Windows power settings already do what you need. By choosing battery saver mode or a custom plan you created, you can limit the processor load to x%.
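For reference, the “limit the processor to x%” knob is the “maximum processor state” setting of a Windows power plan, and it can also be set from an elevated command prompt with `powercfg`. A sketch (the 50 is just an example percentage):

```shell
# Cap the maximum processor state at 50% for the active power plan,
# on both AC and battery. SCHEME_CURRENT, SUB_PROCESSOR and
# PROCTHROTTLEMAX are powercfg's aliases for the active plan, the
# processor power-management subgroup, and "maximum processor state".
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 50
powercfg /setdcvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 50

# Re-apply the plan so the new values take effect.
powercfg /setactive SCHEME_CURRENT
```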

I’m not thinking of battery saver mode. I’m talking about a shift in processing usage, not just raw horsepower: something that could keep up with the changing times while letting companies research computing power more effectively. For example, Intel CPUs lately have only been improving incrementally in power and speed. Using benchmarks such as AnTuTu, Geekbench, or Cinebench, we see that computing power has not increased much. Most of the time we all have this idea: “what if we made a tablet that could work all day (24 hours) without a charge?”

Pauliunas, I understand the Windows power settings, but even with a custom plan, you are only talking about limiting one processor’s load!

Flippo: I think two boards could be a way to solve this. Think of it as a re-route for when the activity becomes too much for one processor. Battery management is what needs to be figured out. I’ll use the MacBook as an example. In the MacBook Pro, you get two graphics processors (granted, they are easier to manage than what we are talking about): the daily driver (Intel’s integrated graphics) and a dedicated graphics card (AMD). The two switch off with each other whenever they need to. For example, when I watch a 4K video or render 4K clips, I see the AMD take over, which keeps the computer from slowing down.

I understand we are talking about tablets, and most consumers, unless I am wrong, rarely upgrade their tablets. Most of our tablets are not fully functional, either. For example, an iPad can’t handle VLC recording (podcast recording) and physical recording at the same time, so from that perspective we have to use our laptops. But the whole idea of the tablet was to let us be lazy and send information from anywhere without having to run back to a laptop. Then came the Surface. While it’s good, there are still limits to what it can do (some say it’s not a pure laptop replacement). To be honest, in 2009 I analyzed data for a research project on a $200 laptop, and I got my results.

Now, it has been years since I did any data analysis, and I’ve only recently become enamored with this idea: what if we combine chips, with one system as slave to the other? In other words, we treat the SoC as a device sitting purely on PCIe (take a phone and put it in your tablet). Intel runs at start-up, then sits quietly in the background while the SoC takes over (think of a graphics card connected over Thunderbolt). The SoC can physically do all the errands you need. Each chip has a thermal sensor, and it heats up when used a lot, so we have the PCIe side report the SoC’s core temperature and give it a range: if too much heat is being generated, Intel takes over, because the SoC physically cannot take the pounding (trust me, I have burned out an iPad). Likewise, we watch how much memory the built-in graphics is using; if we are within, say, 90% of it (it really slows down at 95%), we shift all graphics management to the dedicated GPU. Physically, this can be done, and that’s how I’m thinking of both systems.
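The temperature-driven handover described above could be sketched as a tiny state machine. The thresholds, the simulated sensor readings, and the `Dispatcher` class are all invented for illustration; nothing here reads real hardware:

```python
# Toy model of the proposed handover: tasks run on the SoC until its
# core temperature crosses a threshold, then the Intel chip takes over
# until the SoC cools back down. All numbers are made up.

SOC_HOT_C = 70.0   # hand work to Intel above this temperature
SOC_COOL_C = 55.0  # hand work back to the SoC below this one

class Dispatcher:
    def __init__(self):
        self.active = "soc"

    def step(self, soc_temp_c: float) -> str:
        """Pick the active unit given the SoC core temperature."""
        if self.active == "soc" and soc_temp_c >= SOC_HOT_C:
            self.active = "intel"  # SoC overheating: big core takes over
        elif self.active == "intel" and soc_temp_c <= SOC_COOL_C:
            self.active = "soc"    # SoC cooled down: hand work back
        return self.active

d = Dispatcher()
for t in [40, 60, 72, 68, 54, 45]:
    print(t, d.step(t))
```

The two different thresholds (hysteresis) keep the system from flapping between chips when the temperature hovers near a single cut-off.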

To be honest, I have not coded in a long time. It has been 8 years since I’ve done any data analysis or coding, and I’m re-learning a lot of it, so what I’m suggesting might not be physically feasible. However, I’m sure we could use the SoC’s core temperature to do this. Flippo, what do you think?


@mcalbala or @vudnguyen35, whatever your nick is now :smiley:
You mentioned setting a “throttle” amount, so I replied about throttling… Now I don’t understand what you’re talking about at all. Can you be more specific?

edit: oh wow, sorry, for some reason I thought you were the same guy xD
Anyway, if you’re talking about mashing ARM and x86 together, I think I already explained why that is just not possible in any way at all. It’s not some technical difficulty: even with a billion-dollar budget you couldn’t do it, because they’re incompatible. Doing something on an Intel processor and then switching that job over to ARM would be a bit like trying to hoover your house with a kitchen sink. It just doesn’t work…

Oh, I am not @mcalbala… his name is Mark; I’m Vu. We’re two different people. And I’m not talking about throttling. I’m talking about no throttling, full power.

Well, it’s just an idea and I am trying to see if it’s possible. The SoC has its own board, and I think it might be possible to design a connector so that the system reads the SoC’s board as a graphics card or sound card. We treat the SoC as a graphics card (theoretically speaking), with Intel as the master controller. Intel is used at start-up. Once we get to the desktop, we let Intel decide what to do (like a king). If the system sees tasks such as web browsing, YouTube, or picture taking, Intel idles and the “graphics-card SoC” takes over. As the SoC’s temperature rises past a certain threshold (I honestly have never measured the temperature at which an SoC overheats), Intel takes the load back from the SoC, which should bring the SoC’s temperature down. Since we could also have a soldered graphics card like an AMD, anything related to image management could be handed to the AMD when necessary; we would decide by measuring the memory use of the built-in graphics. That’s my idea so far.

Essentially, the computer still runs on Intel. The SoC is simply there to ease the burden the Intel chip gets hit with: all the “simple” tasks are handled by the SoC, and all the crazy tasks that heat up the SoC are handled by Intel. However, that is not the only problem. Battery management is another.

Since we cannot bring anything above 100 Wh on a plane, about 20,000 mAh is the max we can use, so I am thinking 15,000 mAh. The Surface Pro 4 has a 5,547 mAh battery and lasts 9 hours. Most regular users would then last at least 24 hours before needing to charge. There are also 5,000 mAh battery packs people can buy that give them roughly another 9 hours, which should bring the total to about 30 hours before they need to charge again (it works out to 33 hours, but I’m estimating around 30). However, given that we have two chips plus an AMD GPU running, I’d give the entire system about 20 hours at most; with the power bank, we might be looking at 25 to 26 hours. That is still longer than anything we have ever seen.
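The rough arithmetic behind those figures, assuming a 5 V nominal voltage for the Wh-to-mAh conversion (battery packs are often rated at 3.7 V per cell, which would give a different mAh ceiling), and scaling the Surface Pro 4’s runtime linearly with capacity:

```python
# Back-of-the-envelope battery math for the post above.
# Assumption: 5 V nominal; at 3.7 V the 100 Wh airline limit
# would instead be about 27,000 mAh.

def wh_to_mah(watt_hours: float, volts: float) -> float:
    return watt_hours / volts * 1000

airline_limit_mah = wh_to_mah(100, 5.0)  # 20000.0 mAh ceiling
print(airline_limit_mah)

# Surface Pro 4: 5,547 mAh lasts ~9 hours; scale runtime linearly.
hours_per_mah = 9 / 5547
print(round(15000 * hours_per_mah, 1))           # ~24.3 h on 15,000 mAh
print(round((15000 + 5000) * hours_per_mah, 1))  # ~32.4 h with a 5,000 mAh bank
```

That matches the poster’s ballpark of roughly 24 hours from the internal battery and low thirties with a power bank, before accounting for the extra chips.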

I’m sorry, but @pauliunas was right in his first post. For one, the complexity of the problem is at the level of Microsoft and Intel themselves. Second: tasks of a predictable nature are already often offloaded to fixed-function hardware, which costs very little energy.

Third: no manufacturer that would need to be involved would want this.

It’s an idea that surfaces again and again, too.

Anyhow, ARM with its big.LITTLE designs is one step in this direction, but we are still talking about a single architecture. On the x86 side, the argument has always been that it’s easier and cheaper to simply throw fat cores at the problem to process it faster and then put them to sleep, offloading the, uh, load to fixed-function units.


Speaking of big.LITTLE vs Intel’s approach, I think that while big.LITTLE works better right now, the Core M approach is more of a step toward the future.

big.LITTLE basically means having 2 separate cores just for light tasks… but then instead of n versatile cores you have 2 weak cores and only n−2 strong ones. With Intel’s approach, you have n cores in total and they’re all versatile, i.e. each can run at full power or idle low depending on the need. Of course, the technology isn’t mature enough yet to show its full potential, so we’re not seeing the true benefits.

Then you can think of it from another perspective: what if we take n cores and just add 2 weak cores to the mix, resulting in n+2 total cores? Nowadays that isn’t a problem, but the number of cores you can fit on one chip is not unlimited, so in the future this might further limit the number of “active” cores on a chip.

So, Intel’s approach might never even reach the point where it’s as effective as big.LITTLE in energy savings, but if it does, it will have the additional advantage of not occupying space with extra cores.

Furthermore, it also allows the CPU to scale more efficiently between high and low load, whereas big.LITTLE just splits workloads into “easy” and “hard” and switches cores between those two simple categories. If a workload is just a little heavier than what the “little” cores can handle, it still fires up the power-hungry “big” cores. In that case, Intel’s approach is beneficial, as their optimizations allow the CPU to scale well everywhere in between :slight_smile:
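That difference can be shown with a toy power model. All the wattage numbers and the 30% cut-off here are invented purely to illustrate the argument, not measured from any real chip:

```python
# Toy comparison: a two-category big.LITTLE scheme versus a
# DVFS-style core that scales power with load. Numbers are invented:
# "little" core = 1 W up to 30% load, "big" core = 10 W otherwise.

def big_little_power(load: float) -> float:
    """load in [0, 1] -> watts under a two-category big.LITTLE scheme."""
    return 1.0 if load <= 0.3 else 10.0

def dvfs_power(load: float) -> float:
    """load in [0, 1] -> watts under continuous frequency/voltage scaling."""
    return 1.0 + 9.0 * load

# A load just past the "little" cut-off burns full big-core power
# under big.LITTLE, while the scaling core draws only ~4 W.
for load in (0.1, 0.35, 0.9):
    print(load, big_little_power(load), dvfs_power(load))
```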

(just my 2 cents)


I’m afraid you picture it as easier than it really is, for many reasons.

See, ARM and x86 are two very, very different instruction sets and are not compatible with one another. As a result, software built for one architecture cannot run on the other (unless emulated).
The consequence is that the two processors would need to run two different instances of an OS, with separate RAM and separate storage. One microprocessor can’t “take over”, as it simply can’t understand the instruction set of the other one.

If you don’t accept the two systems being isolated, as you suggest, then the only other option is to build a custom “hybrid” architecture (putting one processor on a card is one way, as you already noted) with custom software, since arbitrary software cannot dynamically switch from one instruction set to the other.
This is basically the idea behind using a GPU for general-purpose computation: you can do it, but the software needs to explicitly support the feature.

From a power-efficiency perspective, then, I would suggest making the ARM chip the master and the Intel chip the slave that kicks in when needed: x86 processors are very power-inefficient even when mostly idle.

Anyway, the bottom line of all this is that, as said, it makes for an excellent research project, but also an extremely difficult and expensive one to implement when it comes to making it generic enough to run existing software. Never mind production.
That’s why Microsoft is trying to run x86 on ARM through hardware-assisted emulation: it’s the only feasible way to have x86 software run on ARM, even though there is a toll in terms of performance (which, by the way, is not excessive).
The market, should there be interest in the feature, will then create the push to (finally) make really high-performance ARM processors, something I honestly hope for, as it’s unbelievable that only Apple has so far managed to pull off something competitive on that side.

I think Intel’s optimizations are really promising, so we have to choose here: either expand ARM, making processors bigger and more power hungry, or make Intel processors even more efficient and hence less power hungry. Either option would eventually result in one platform taking over the other, at least in some market segments. But if you choose the former, even powerful ARM processors would have to emulate x86 to run actual software, which is a waste of resources. If we choose to push Intel into mobile instead, we wouldn’t need to emulate anything, as Android apps are mostly written in Java and most of them already work on Intel processors (I can confirm that as a Zenfone 2 owner). I think it’s pretty easy to see that this is the better option. Sure, the first option can happen in the background too, I don’t really care, but I’m positive it will die as soon as Intel reaches the required efficiency (and they’re not far off at all).

The problem with x86 efficiency is not in the processor microarchitectures, but rather in the basic concepts of its instruction set.

Intel has done a great job of reducing this overhead in the past few years and has even dropped part of the old x86 baggage… but unfortunately a lot still remains.

This is why ARM has always been more efficient: it was designed from the ground up in a very consistent and energy-efficient way, the reason being that it targeted small devices.

The way I see it, it’s much easier to use emulation for “normal” use and a different architecture for raw horsepower, up until the day the other architecture manages to catch up. Then all bets will be off, and we will start to see competition not only between processor brands but even between instruction sets.

If Microsoft really manages to do this emulation thing right, I think we’ll see very interesting changes in the following few years…


This is @mcalbala. The V gets its performance by not throttling; others with less efficient cooling get machines that avoid overheating by throttling (running at a lower performance level to run cooler). I am very glad that the V overbuilt its passive cooling apparatus so that it can run for extended periods without stepping down performance, i.e. throttling.