Q. I am just wondering why the assembly languages for Windows and Linux are different since they are both based on the x86 architecture. And is there a basic assembly language that doesn't rely on an OS? And if so, what is it?
OK, if they are the same, then why do the different OSs interact with it differently, and is there a way to bypass the OS so that you can just program directly to the computer?
A. For a particular architecture (i.e. CPU type), the assembly language is the same. Realize that even modern Intel CPUs have somewhat different instruction sets, but in general most compilers stick to the common instruction set shared among all Intel CPUs unless advanced instruction sets (a.k.a. extensions) are specifically enabled. MMX was one such extension.
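To make "extensions are specifically enabled" a little more concrete, here is a minimal sketch (assuming GCC or Clang on an x86 machine) that asks the CPU at run time whether it supports a given extension. The feature names are the ones GCC's __builtin_cpu_supports accepts; the answer comes from the CPU itself, not from the operating system:

    /* Minimal sketch, GCC/Clang on x86: query CPU feature flags at run time.
     * __builtin_cpu_supports() reads the CPUID information the CPU reports,
     * which is the same regardless of which OS is running. */
    #include <stdio.h>

    int main(void)
    {
        __builtin_cpu_init();   /* initialize the feature data (harmless to call from main) */
        printf("MMX  supported: %s\n", __builtin_cpu_supports("mmx")  ? "yes" : "no");
        printf("SSE2 supported: %s\n", __builtin_cpu_supports("sse2") ? "yes" : "no");
        printf("AVX2 supported: %s\n", __builtin_cpu_supports("avx2") ? "yes" : "no");
        return 0;
    }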
Assembly really doesn't have much to do with device I/O or drivers, since assembly is *below* the operating system. In assembly, you write to or read from (poke/peek) memory locations, which may be mapped to devices or may be actual memory. If you are actually programming in assembly (not in a compiled language), then you just have to know which locations mean what. That "abstraction" is what the OS and compilers provide to higher-level languages.
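Here is a rough sketch of the poke/peek idea in C. The address is purely illustrative, and on a modern protected-mode OS an ordinary program cannot touch physical addresses like this at all; only a kernel, a driver, or a freestanding (no-OS) program could:

    #include <stdint.h>

    /* Illustrative only: treat a fixed physical address as a device register.
     * 0xFEC00000 is just an example address; the real location depends on the
     * hardware and on how the firmware/OS mapped the device. */
    #define DEVICE_REG ((volatile uint32_t *)0xFEC00000u)

    void     poke(uint32_t value) { *DEVICE_REG = value; }   /* write to the device */
    uint32_t peek(void)           { return *DEVICE_REG; }    /* read from the device */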
What you are actually confused about is the binary executable format, which *is* OS dependent. Every executable binary contains information that helps the OS know how to run it, what it's called, and so forth. Binaries contain the assembly (machine) instructions that need to run, but modern binaries are also often not self-standing: the code in a binary has to be "loaded" and run along with any other "shared" binary code fragments (i.e. shared libraries). This is where portability breaks down. On the hardware-abstraction side, there are also some runtime adjustments the OS has to make in order to map the right locations into the code, because hardware today is not at fixed, predictable locations. The BIOS can move a device from one memory address to another at boot time or even at run time; this is basically what PnP is. It provides a way for the OS to modify how the BIOS positions hardware in memory to avoid conflicts. That is why, if you are running an OS that is *not* PnP aware, you need to disable the feature in the BIOS so that the BIOS will initialize and fix those parameters for the hardware at boot time. Once the OS is done with relocation and device mapping, the code that is sent to the CPU is assembly, and for all intents and purposes this code would be basically the same no matter what OS is used.
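As a small illustration of "information that helps the OS know how to run it", here is a sketch that looks only at the first bytes of a file to tell the two common formats apart. The magic numbers are standard: ELF files begin with 0x7F 'E' 'L' 'F', while Windows PE executables begin with 'M' 'Z':

    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror(argv[1]); return 1; }

        unsigned char magic[4] = {0};
        size_t n = fread(magic, 1, sizeof magic, f);   /* read the magic number */
        fclose(f);
        if (n < 2) { fprintf(stderr, "file too short\n"); return 1; }

        if (memcmp(magic, "\x7f" "ELF", 4) == 0)
            puts("ELF executable (Linux, Solaris, the BSDs, ...)");
        else if (magic[0] == 'M' && magic[1] == 'Z')
            puts("MZ/PE executable (Windows)");
        else
            puts("unrecognized format");
        return 0;
    }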
So what are some binary formats that are shared? Well, ELF is one. ELF is used on various Unix and Unix-like OSs such as Linux and Solaris. Unfortunately, binary compatibility is rarely a useful reality, because it requires all of the shared components to be present in compatible versions and formats. In a sense, the WINE project on Linux is an attempt to reimplement the Windows programming interfaces and loader in order to "load" a Windows binary, do all of the necessary relocations for I/O and shared components that Windows would do, and then send the code to the CPU. In a way, WINE is an attempt to create binary compatibility with Windows "EXE" files.
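To see the "compatible versions and formats" point from the other side, here is a sketch (Linux, using the system's standard <elf.h> header) that prints the OS/ABI tag and target machine recorded in an ELF header; a 64-bit ELF is assumed just to keep the example short. Even files that share the ELF format carry a tag saying which system's conventions they expect:

    #include <elf.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }

        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror(argv[1]); return 1; }

        Elf64_Ehdr hdr;                              /* assumes a 64-bit ELF for brevity */
        size_t n = fread(&hdr, sizeof hdr, 1, f);
        fclose(f);
        if (n != 1) { fprintf(stderr, "could not read ELF header\n"); return 1; }

        printf("OS/ABI tag: %u (0 = System V, 3 = GNU/Linux, 6 = Solaris)\n",
               (unsigned)hdr.e_ident[EI_OSABI]);
        printf("machine:    %u (62 = x86-64)\n", (unsigned)hdr.e_machine);
        return 0;
    }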
How do Unix and Unix-like systems such as Mac OS X run more efficiently than Windows?
Q. I was using Mac OS X today on a machine someone brought in. It was pretty cool, but I'm not convinced it's better than Windows 7 for multitasking.
It feels more responsive, but you can't multitask as easily.
A. There are many misconceptions about Unix, Linux, and Windows. At the hardware level, Unix and Linux run more efficiently than Windows does. Unix and Linux install exactly the drivers each computer needs; you generally don't need third-party drivers, although in some cases a third-party driver is the only way a particular piece of hardware will run. Linux can certainly multitask, and so can Unix. Unfortunately, Windows dominates computer use in the average home, and most users will find Unix and Linux a little odd and to some degree limited, only because third-party vendors and developers make more money developing for Windows. Linux is a programmer's and hacker's dream, because of all the software you can install at no cost other than your time. I work with all three operating systems and find Unix the hardest one to deal with in most cases.
If Windows is what you like, then Windows should be your choice. It takes some time playing around with Linux before you will actually like it. Most people don't spend enough time with it and aren't really sure how to install all the software and programs. Sorry, I got carried away and probably didn't answer your question.
Is there a list of TV tuners compatible with Linux MCE?
Q. Are there any existing TV tuners that are compatible with Linux today?
A. See the list here: http://sagetv.com/requirements.html