os212


LINKS

Week 1

1. Install VirtualBox on Debian 11 Desktop

In this tutorial, you will learn how to install VirtualBox on Debian 11 desktop version. VirtualBox is a general-purpose full virtualizer for x86 hardware, targeted at server, desktop and embedded use.

2. How to Use VirtualBox: Quick Overview

This blog post explores how to use VirtualBox and contains the following sections:

- How to set up VirtualBox?
  - Enabling CPU virtualization feature
  - Downloading the VirtualBox installer
  - Running the installer and defining the installation options
- Deploying a new VM
  - Creating a virtual machine
  - Installing a guest OS
- Shared folders and clipboard
- Making a VM copy
- Using snapshots
- How to use VirtualBox for recording video inside the VM

3. Virtual Memory

A computer can address more memory than the amount physically installed on the system. This extra memory is actually called virtual memory and it is a section of a hard disk that’s set up to emulate the computer’s RAM.

Week 2

1. Simple Steps for Internet Safety

From the FBI's National Cyber Security Awareness campaign: in today's digital world, online safety should be of paramount concern for all individuals and organizations, because the threats posed by cyber criminals can't be ignored. To counteract these threats, there are steps you can take to minimize the risks associated with doing any kind of business online, surfing the Internet, and/or sharing information on social media sites.

2. What is Malware

Malware, short for “malicious software,” refers to any intrusive software developed by cybercriminals (often called “hackers”) to steal data and damage or destroy computers and computer systems. Examples of common malware include viruses, worms, Trojan viruses, spyware, adware, and ransomware. Recent malware attacks have exfiltrated data in mass amounts.

3. Is Private Browsing and VPN Really Secure?

Whether you run a business or go online for yourself, you probably know that browsing the web can open you and your organization up to all sorts of risks. By connecting to the internet, you expose yourself and your business to hackers and thieves, who could steal anything from personal information and web browsing history to payment data. So, when it comes to protecting yourself and your business online, you may have looked into private browsing or choosing a VPN. But which of these is right for you?

4. Is Bitcoin Safe?

Bitcoin has the most crime reports of any cryptocurrency, which makes sense since it’s also the oldest and most-widely held crypto. Beyond digital crimes, Bitcoin’s safety as an investment is often questioned thanks to the frequency and scale of its value fluctuations.

5. Is Linux more secure than Windows?

While neither Linux nor Windows can claim to be 100% bulletproof, the perceived wisdom is that Linux is more secure than Windows. We try to find out if that's the case.

6. Learn C

This course is for total Beginners, you will learn how to code using the C Programming Language in an easy, simple, and efficient way.

7. Learn C (again)

Whether you are an experienced programmer or not, this website is intended for everyone who wishes to learn the C programming language. There is no need to download anything - Just click on the chapter you wish to begin from, and follow the instructions.

8. Why is the C language important?

Despite the prevalence of higher-level languages, the C programming language continues to empower the world. There are plenty of reasons to believe that C programming will remain active for a long time. Here are some reasons that C is unbeatable, and almost mandatory, for certain applications.

9. Cybersecurity

Cybersecurity is the process of protecting systems, data, networks, and programs from digital threats or attacks. These attacks are usually carried out by irresponsible parties for a variety of purposes; some examples are accessing sensitive information, or even altering and destroying important data.

10. The Importance of Digital Security

Cybercrime cost the world around $600 billion in 2017. Further, a report suggests that 63% of Indian businesses are concerned about falling prey to cybercrimes. Lack of cybersecurity is posing a great threat of data theft or unethical hacks in organizations.

Week 3

1. 10 Commands to Check Disk Partitions and Disk Space on Linux

In this post we take a look at some commands that can be used to check the partitions on your system. The commands check what partitions there are on each disk, along with other details like the total size, used-up space, file system, etc. Commands like fdisk, sfdisk and cfdisk are general partitioning tools that can not only display the partition information but also modify it.

2. Working with tarballs on Linux

Tarballs provide a versatile way to back up and manage groups of files on Linux systems. Follow these tips to learn how to create them, as well as extract and remove individual files from them.

3. What Are Some Reasons You Would Use Disk Partitioning

Partitioning a disk can make it easier to organize files, such as video and photo libraries, especially if you have a large hard drive. Creating a separate partition for your system files (the startup disk) can also help protect system data from corruption since each partition has its own file system.

4. What Is NFS? Understanding the Network File System

A Network File System or NFS is necessary for helping your business share files over a network. An NFS is a protocol that lets users on client computers access files on a network. You can access remote data and files from anything that links to the network you will use. All people within a network will have access to the same files, making file-sharing efforts easier.

5. What Is NTFS and How Does It Work?

NT file system (NTFS), which is also sometimes called the New Technology File System, is a process that the Windows NT operating system uses for storing, organizing, and finding files on a hard disk efficiently.

6. Disk Partitioning in Linux

Disk Partitioning is the process of dividing a disk into one or more logical areas, often known as partitions, on which the user can work separately. It is one step of disk formatting. If a partition is created, the disk will store the information about the location and size of partitions in the partition table. With the partition table, each partition can appear to the operating system as a logical disk, and users can read and write data on those disks.

7. What is a Bash Script?

This page is mostly foundation information. It’s kinda boring but essential stuff that will help you to appreciate why and how certain things behave the way they do once we start playing about with the fun stuff (which I promise we’ll do in the next section). Taking the time to read and understand the material in this section will make the other sections easier to digest, so persevere and it’ll be well worth your time.

8. Virtual Storage

As the virtual machine will most probably expect to see a hard disk built into its virtual computer, Oracle VM VirtualBox must be able to present real storage to the guest as a virtual hard disk.

9. Linux File System

A Linux file system is a structured collection of files on a disk drive or a partition. A partition is a segment of memory and contains some specific data. In our machine, there can be various partitions of the memory. Generally, every partition contains a file system.

10. Linux Filesystem Hierarchy Standard

The Filesystem Hierarchy Standard describes the directory structure and its contents in Unix and Unix-like operating systems. It explains where files and directories should be located and what they should contain.

Week 4

1. Clear RAM Memory Cache, Buffer and Swap Space

Like any other operating system, GNU/Linux has implemented memory management efficiently and even more than that. But if any process is eating away your memory and you want to clear it, Linux provides a way to flush or clear ram cache.

2. Memory Utilization in Linux

Linux is an awesome operating system. It performs well with fewer resources and tries to maximize utilization of available resources automatically, and because of this, it’s slightly difficult to understand resource utilization.

3. Find Out the Total Physical Memory (RAM) on Linux

Sometimes, we might need to check for total memory size on a server running Linux, or we might need to use memory stats in shell scripts. Fortunately, we have access to numerous tools that we can use to check for total physical memory. In this tutorial, we’re going to take different approaches to serve that purpose by using several useful commands and tools.

4. Free vs. Available Memory in Linux

At times we will need to know precisely how our Linux systems use memory. This article will examine how to use the free command-line utility to view memory usage on a Linux system. In doing so, we will clearly define the difference between free vs. available memory on Linux systems.

5. How to Use Pointers in C

In C, learning pointers is simple and enjoyable. Certain programming-language activities are easier to complete with pointers, while others, like dynamic memory allocation, seem impossible to complete without them. To be a competent C developer, it is thus beneficial to understand pointers. Within C, a pointer is a variable that holds the location of some other variable.

6. C development on Linux

You probably know that operating systems deal with addresses when storing values, just as you would label things inside a warehouse so you have an easy way of finding them when needed. On the other hand, an array can be defined as a collection of items identified by indexes. You will see later why pointers and arrays are usually presented together, and how to become efficient in C using them.

7. What Is Little-Endian And Big-Endian Byte Ordering?

Computers store data in memory in binary. One thing that is often overlooked is the formatting at the byte level of this data. This is called endianness and it refers to the ordering of the bytes. Specifically, little-endian is when the least significant bytes are stored before the more significant bytes, and big-endian is when the most significant bytes are stored before the less significant bytes.

8. How is Virtual Memory Translated to Physical Memory?

Memory is one of the most important host resources. For workloads to access global system memory, we need to make sure virtual memory addresses are mapped to physical addresses. There are several components working together to perform these translations as efficiently as possible. This blog post will cover the basics of how virtual memory addresses are translated.

9. Pointers in C Programming

The pointer in C is a variable that stores the address of another variable. A pointer can also refer to another pointer or to a function. A pointer can be incremented/decremented, i.e., made to point to the next/previous memory location. The purpose of pointers is to save memory space and achieve faster execution time.

10. Does Linux Use Less RAM Than Windows?

It depends. Windows and Linux may not use RAM in exactly the same way, but they are ultimately doing the same thing. So which one uses less RAM?

Week 5

1. Demand Paging in OS

Demand paging is a swapping technique used in virtual memory systems. Rather than moving all of a program's data from the hard drive to main memory up front, pages are transferred only when they are actually demanded by the program.

2. Virtual and Physical Addresses

Physical addresses are provided by the hardware; virtual (or logical) addresses are provided by the OS kernel. The OS divides physical memory into partitions, and different partitions can have different sizes.

3. What is virtual memory?

Virtual memory is a feature of an operating system that enables a computer to compensate for shortages of physical memory by transferring pages of data from random access memory to disk storage. This process is done temporarily and is designed to work as a combination of RAM and space on the hard disk. This means that when RAM runs low, virtual memory can move data from it to a space called a paging file. This process frees up RAM so that the computer can complete the task.

4. How Big Should Your Page File or Swap Partition Be?

According to an old rule of thumb, your page file or swap should be “double your RAM” or “1.5x your RAM.” But do you really need a 32 GB page file or swap if you have 16 GB of RAM? You probably don’t need that much page file or swap space, which is a relief considering a modern computer might have a solid-state drive with very little space.

5. How to create and activate a paging file on the Linux command line

As you learn what a swap file is and does, you will learn how to create and activate one on your Linux instance. Armed with this knowledge, you will be able to ensure that your system no longer runs out of memory.

6. What are the Page Replacement Algorithms?

This lesson will introduce you to the concept of page replacement, which is used in memory management. You will understand the definition, need and various algorithms related to page replacement. A computer system has a limited amount of memory. Adding more memory physically is very costly. Therefore most modern computers use a combination of both hardware and software to allow the computer to address more memory than the amount physically present on the system. This extra memory is actually called Virtual Memory.

7. Page Replacement Algorithms in Operating Systems

In an operating system that uses paging for memory management, a page replacement algorithm is needed to decide which page should be replaced when a new page comes in.

8. What is Thrash?

In computer science, thrashing is the poor performance of a virtual memory (or paging) system that occurs when the same pages are loaded repeatedly because there is not enough main memory to keep them resident. Depending on the configuration and algorithm, the actual throughput of a system can degrade by multiple orders of magnitude.

In computer science, thrashing occurs when a computer’s virtual memory resources are overused, leading to a constant state of paging and page faults, inhibiting most application-level processing. It causes the performance of the computer to degrade or collapse. The situation can continue indefinitely until the user closes some running applications or the active processes free up additional virtual memory resources.

9. Copy-on-Write in Operating System

Copy-on-Write (CoW) is mainly a resource management technique that allows the parent and child process to share the same pages of memory initially. Only if either process, parent or child, modifies a shared page is that page copied.

CoW is basically a technique for efficiently copying data resources in a computer system. If a unit of data is copied but not modified, the “copy” can exist merely as a reference to the original data.

10. NUMA (non-uniform memory access)

NUMA (non-uniform memory access) is a method of configuring a cluster of microprocessors in a multiprocessing system so that they can share memory locally, improving performance and the ability of the system to be expanded. NUMA is used in a symmetric multiprocessing (SMP) system. An SMP system is a “tightly coupled,” “share everything” system in which multiple processors working under a single operating system access each other’s memory over a common bus or “interconnect” path. Ordinarily, a limitation of SMP is that as microprocessors are added, the shared bus or data path gets overloaded and becomes a performance bottleneck. NUMA adds an intermediate level of memory shared among a few microprocessors so that all data accesses don’t have to travel on the main bus.

Week 6

1. Fork() in OS

System call fork() is used to create processes. It takes no arguments and returns a process ID. The purpose of fork() is to create a new process, which becomes the child process of the caller. After a new child process is created, both processes will execute the next instruction following the fork() system call. Therefore, we have to distinguish the parent from the child.

2. Threads in Operating System

A thread is a single sequential flow of execution of tasks within a process, so it is also known as a thread of execution or thread of control. There can be more than one thread inside a process. Each thread of the same process makes use of a separate program counter and a stack of activation records and control blocks. A thread is often referred to as a lightweight process.

3. Multithreading in OS

You are already aware of the term multitasking, which allows processes to run concurrently. Similarly, multithreading allows sub-processes (threads) to run concurrently or in parallel. Also, we can say that when multiple threads run concurrently it is known as multithreading. Some widely used programming languages like Java and Python allow developers to work with threads in their programs. In this blog, we will learn about the various multithreading models and the benefits of multithreading in OS. So, let’s get started.

4. The exec family of system calls

The exec family of system calls replaces the program executed by a process. When a process calls exec, all code (text) and data in the process is lost and replaced with the executable of the new program. Although all data is replaced, all open file descriptors remain open after calling exec unless explicitly set to close-on-exec. In the diagram below, a process is executing Program 1. The program calls exec to replace the program executed by the process with Program 2.

5. Using Makefiles

This page explains using Makefiles and the second-stage bootloader.

Week 7

1. What is Semaphore?

A semaphore is simply a non-negative variable shared between threads. A semaphore is a signaling mechanism, and a thread that is waiting on a semaphore can be signaled by another thread. It uses two atomic operations, wait and signal, for process synchronization.

2. Introduction to DEADLOCK

Deadlock is a situation that occurs in an OS when a process enters a waiting state because another waiting process is holding the demanded resource. Deadlock is a common problem in multiprocessing, where several processes share a specific type of mutually exclusive resource known as a soft lock.

3. What is a starvation problem in an operating system?

Starvation is the problem that occurs when low-priority processes get stuck for an unspecified time while high-priority processes keep executing. A steady stream of higher-priority processes can stop a low-priority process from ever obtaining the processor. Starvation happens when a process is indefinitely delayed; this can emerge when a process needs a resource for execution that is never assigned to it.

4. Difference between Deadlock and Starvation

Deadlock and starvation are conditions in which the processes requesting a resource are delayed for a long time. However, deadlock and starvation differ in many ways. Deadlock happens when every process holds a resource and waits for another process to release another resource. In contrast, in starvation, the processes with high priorities continuously consume resources, preventing low-priority processes from acquiring resources. In this article, you will learn the difference between deadlock and starvation; but before discussing the difference, it helps to understand each condition on its own.

5. Process Synchronization

In this tutorial, we will be covering the concept of Process synchronization in an Operating System.

Process synchronization was introduced to handle problems that arise when multiple processes execute concurrently.

On the basis of synchronization, processes are categorized into two types: independent processes and cooperative processes.

6. Peterson's Solution

Peterson’s algorithm (or Peterson’s solution) is a concurrent programming algorithm for mutual exclusion that allows two or more processes to share a single-use resource without conflict, using only shared memory for communication. It was formulated by Gary L. Peterson in 1981.[1] While Peterson’s original formulation worked with only two processes, the algorithm can be generalized for more than two.

7. The Critical Section Problem

The critical section is the part of a program which tries to access shared resources. That resource may be any resource in a computer, like a memory location, data structure, CPU or any I/O device. The critical section cannot be executed by more than one process at the same time; the operating system faces difficulty in allowing and disallowing processes from entering the critical section. The critical section problem is used to design a set of protocols which can ensure that a race condition among the processes will never arise. In order to synchronize the cooperative processes, our main task is to solve the critical section problem by providing a solution that satisfies the required conditions.

8. Banker’s Algorithm

The Banker's Algorithm is used to avoid deadlock and to allocate resources safely to each process in the computer system. The “S-State” (safe state) check examines all possible tests or activities before deciding whether an allocation should be allowed for each process. It also helps the operating system successfully share resources between all the processes. The algorithm is named for the way a banker checks whether a loan can be safely granted; the operating system similarly simulates resource allocation to check that it is safe.

9. Mutex – Mutual Exclusion Object

In computer programming, a mutual exclusion object (mutex) is a program object that allows multiple program threads to share the same resource, such as file access, but not simultaneously. When a program is started, a mutex is created with a unique name. After this stage, any thread that needs the resource must lock the mutex, excluding other threads while it is using the resource. The mutex is unlocked when the data is no longer needed or the routine is finished.

10. Mutex vs Semaphore: What’s the Difference?

On this page, we learn about the use of semaphores and mutexes, the difference between semaphore and mutex, common facts about mutexes and semaphores, and the advantages and disadvantages of each.

Week 8

1. Process Scheduling

Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

2. Scheduling algorithms

A Process Scheduler schedules different processes to be assigned to the CPU based on particular scheduling algorithms. There are six popular process scheduling algorithms which we are going to discuss in this chapter −

- First-Come, First-Served (FCFS) Scheduling
- Shortest-Job-Next (SJN) Scheduling
- Priority Scheduling
- Shortest Remaining Time
- Round Robin (RR) Scheduling
- Multiple-Level Queues Scheduling

3. Difference between Preemptive and Non-Preemptive Scheduling

In the Operating System, the process scheduling algorithms can be divided into two broad categories i.e. Preemptive Scheduling and Non-Preemptive Scheduling. In this blog, we will learn the difference between these two.

4. Big O Notation Explained

In this article, we will have an in-depth discussion about Big O notation. We will start with an example algorithm to open up our understanding. Then, we will go into the mathematics a little bit to have a formal understanding. After that we will go over some common variations of Big O notation. In the end, we will discuss some of the limitations of Big O in a practical scenario. A table of contents can be found below.

5. Process State Models

When a process is first created by the OS, it initializes the process control block for the process, and the new process enters the system in the Not-running state. After some time, the currently running process will be interrupted by some event, and the OS will move it from the Running state to the Not-running state. The dispatcher then selects one process from the Not-running processes and moves it to the Running state for execution.

6. Multiple Processors Scheduling

Multiple processor scheduling, or multiprocessor scheduling, focuses on designing the scheduling function for a system that consists of more than one processor. Multiple CPUs share the load (load sharing) in multiprocessor scheduling so that various processes run simultaneously. In general, multiprocessor scheduling is complex compared to single-processor scheduling. In multiprocessor scheduling there are many identical processors, so any process can be run on any processor at any time.

The multiple CPUs in the system are in close communication, which shares a common bus, memory, and other peripheral devices. So we can say that the system is tightly coupled. These systems are used when we want to process a bulk amount of data, and these systems are mainly used in satellite, weather forecasting, etc.

7. CPU Scheduling

Almost all programs have some alternating cycle of CPU number crunching and waiting for I/O of some kind. ( Even a simple fetch from memory takes a long time relative to CPU speeds. ) In a simple system running a single process, the time spent waiting for I/O is wasted, and those CPU cycles are lost forever. A scheduling system allows one process to use the CPU while another is waiting for I/O, thereby making full use of otherwise lost CPU cycles. The challenge is to make the overall system as “efficient” and “fair” as possible, subject to varying and often dynamic conditions, and where “efficient” and “fair” are somewhat subjective terms, often subject to shifting priority policies.

8. What is Burst time, Arrival time, Exit time, Response time, Waiting time, Turnaround time, and Throughput?

When we are dealing with CPU scheduling algorithms, we encounter some confusing terms like burst time, arrival time, exit time, waiting time, response time, turnaround time, and throughput. These parameters are used to measure the performance of a system. So, in this blog, we will learn about these parameters one by one. Let’s get started.

9. Deadline Scheduling for Real-Time Systems

Real-time systems (RTS) are carefully designed systems consisting of software and hardware used to capture and respond to events occurring in the real world. Like many computer systems, the RTS process must operate correctly, but it has an additional requirement in that it must act in a timely manner. If the automobile RTS controlling your brakes does not act in a timely manner, it may cause a catastrophic event. To accomplish this goal, the design engineer must carefully select from a variety of real-time scheduling options available.

10. Comparing real-time scheduling on the Linux kernel and an RTOS

By default, the Linux kernel build used in many open source distributions is the normal/default kernel, which doesn’t support real-time scheduling. If an embedded developer wants to compare the scheduling policies of Linux to a real-time operating system, it is more useful to compare RTOS performance to a version of Linux that does have real-time features.

Fortunately, in addition to this default kernel, a real-time kernel version that supports real-time scheduling policies is also available. In this article, and in the code examples that are included, an effort is made to compare the real-time operation of standard and real-time Linux with normal RTOS operation and to evaluate the differences and similarities.

Week 9

1. Computer Storage Structure

Computer Storage contains many computer components that are used to store data. It is traditionally divided into primary storage, secondary storage and tertiary storage.

2. Bootloader: What you need to know about the system boot manager

A bootloader, also known as a boot program or bootstrap loader, is a special operating system software that loads into the working memory of a computer after start-up. For this purpose, immediately after a device starts, a bootloader is generally launched by a bootable medium like a hard drive, a CD/DVD or a USB stick. The boot medium receives information from the computer’s firmware (e.g. BIOS) about where the bootloader is. The whole process is also described as “booting”.

3. What Is The Difference Between Bootloader And Firmware?

Firmware: small-footprint software usually found in embedded devices. Bootloader: the part of the firmware usually run during the boot sequence, which allows loading new firmware to update the device from SPI, USB, CAN, etc. Firmware assumes an intermediary role between the hardware and software, including potential future upgrades of the software. Some firmware (such as the BIOS on a PC) does the job of booting up a computer by initialising the hardware components and loading the operating system.

4. Systemd

systemd is a suite of basic building blocks for a Linux system. It provides a system and service manager that runs as PID 1 and starts the rest of the system.

systemd provides aggressive parallelization capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux control groups, maintains mount and automount points, and implements an elaborate transactional dependency-based service control logic. systemd supports SysV and LSB init scripts and works as a replacement for sysvinit.

5. RAID (redundant array of independent disks)

RAID (redundant array of independent disks) is a way of storing the same data in different places on multiple hard disks or solid-state drives (SSDs) to protect data in the case of a drive failure. There are different RAID levels, however, and not all have the goal of providing redundancy.

6. What Is UEFI and Legacy BIOS? UEFI vs Legacy BIOS

The BIOS is an important part of the computer's firmware. When you power on your computer, the processor starts working and first calls the BIOS firmware; the BIOS then runs POST checks to initialize and identify hardware like hard disks, RAM, peripheral devices, the GPU, DMA controllers and many more. If all is OK, the BIOS loads the first sector of each storage device into memory and scans for a valid MBR. If a master boot record (MBR) is found, it executes the low-level boot loader code present in the MBR, which allows the user to select a partition to boot from. If one is not found, it proceeds to the next device in the boot order (set in the BIOS).

UEFI stands for Unified Extensible Firmware Interface. It is the successor of the BIOS, not a replacement: a user-friendly, GUI-based BIOS. The task of both BIOS and UEFI is the same; the main differences lie in the location of the firmware code, how they prepare the system before handing it over to the OS, and what convenience they offer for calling the code while the system is running. In addition, UEFI has advanced features like Secure Boot, which will be discussed later.

7. GRUB: The Grand Unified Bootloader

Like LILO, the GRUB boot loader can load other operating systems in addition to Linux. GRUB was written by Erich Boleyn to boot operating systems on PC-based hardware, and is now developed and maintained by the GNU project. GRUB was intended to boot operating systems that conform to the Multiboot Specification, which was designed to create one booting method that would work on any conforming PC-based operating system. In addition to multiboot-conforming systems, GRUB can boot directly to Linux, FreeBSD, OpenBSD, and NetBSD. It can also boot other operating systems such as Microsoft Windows indirectly, through the use of a chainloader . The chainloader loads an intermediate file, and that file loads the operating system’s boot loader.

8. The Boot Process, Init, and Shutdown

When a computer is booted, the processor looks at the end of the system memory for the BIOS (Basic Input/Output System) and runs it. The BIOS program is written into read-only permanent memory, and is always ready to go. The BIOS provides the lowest level interface to peripheral devices and controls the first step of the boot process.

The BIOS tests the system, looks for and checks peripherals and then looks for a drive to boot from. Usually, it checks the floppy drive (or CD-ROM drive on many newer systems), if present, and then it looks on the hard drive. On the hard drive, the BIOS looks for a Master Boot Record (MBR) starting at the first sector on the first hard drive and starts the MBR running.

The MBR looks for the first active partition and reads the partition’s boot record. The boot record contains instructions on how to load the boot loader, LILO (LInux LOader). The MBR then loads LILO and LILO takes over the process.
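The scan described above can be sketched in Python: a minimal MBR parser (an illustration, not a real boot loader) that validates the 0x55AA boot signature and finds the first active entry in the four-slot partition table.

```python
import struct

MBR_SIZE = 512
SIGNATURE_OFFSET = 510      # last two bytes hold the 0x55AA boot signature
PART_TABLE_OFFSET = 446     # the four 16-byte partition entries start here
PART_ENTRY_SIZE = 16

def first_active_partition(mbr: bytes):
    """Return (index, partition_type, start_lba) of the first active
    partition in a 512-byte MBR, or None if there is no valid MBR."""
    if len(mbr) != MBR_SIZE or mbr[SIGNATURE_OFFSET:] != b"\x55\xaa":
        return None  # no valid boot signature: the BIOS would try the next device
    for i in range(4):
        start = PART_TABLE_OFFSET + i * PART_ENTRY_SIZE
        entry = mbr[start:start + PART_ENTRY_SIZE]
        boot_flag = entry[0]                              # 0x80 marks "active"
        ptype = entry[4]                                  # partition type byte
        start_lba = struct.unpack_from("<I", entry, 8)[0] # first sector (LBA)
        if boot_flag == 0x80:
            return i, ptype, start_lba
    return None
```

On a real Linux system the same function could be fed the first 512 bytes of a disk (reading it requires root), e.g. `first_active_partition(open("/dev/sda", "rb").read(512))`.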

LILO reads the file /etc/lilo.conf, which spells out which operating system(s) to configure or which kernel to start and where to install itself (for example, /dev/hda for your hard drive). LILO displays a LILO: prompt on the screen and waits for a preset period of time (also set in the lilo.conf file) for input from the user. If your lilo.conf is set to give LILO a choice of operating systems, at this time you could type in the label for whichever OS you wanted to boot.

After waiting for a set period of time (five seconds is common), LILO proceeds to boot whichever operating system appears first in the lilo.conf file.
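An illustrative lilo.conf tying these options together (the device names, labels, and kernel version are examples, not prescriptions):

```
boot=/dev/hda            # where LILO installs itself (MBR of the first disk)
timeout=50               # wait 5 seconds (value is in tenths) at the LILO: prompt
default=linux            # entry booted if the user types nothing

image=/boot/vmlinuz-2.2.15-xx
    label=linux          # type "linux" at the LILO: prompt to boot this kernel
    root=/dev/hda1
    read-only

other=/dev/hda2
    label=windows        # a second OS selectable at the LILO: prompt
```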

If LILO is booting Linux, it first boots the kernel, which is a vmlinuz file (plus a version number, for example, vmlinuz-2.2.15-xx) located in the /boot directory. Then the kernel takes over.

9. The Upstart Event System

Designed with flexibility from the beginning, the Upstart event system utilizes a variety of concepts that differ from conventional initialization systems. The solution is installed by default on Red Hat Enterprise Linux (RHEL) 6, as well as Google’s Chrome OS, and Ubuntu, although recent debate has caused confusion over whether this will continue.

10. What is Systemctl in Linux

The systemctl command is the utility used to examine and control the systemd system and service manager. systemd is a collection of system management libraries, utilities, and daemons that functions as a successor to the System V init daemon, and it serves as the system and service manager on most (though not all) Unix-like distributions.
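As an illustration, here is a minimal unit file that systemctl could then manage; the name myapp.service, the description, and the ExecStart path are all hypothetical:

```
# /etc/systemd/system/myapp.service
[Unit]
Description=Example background service
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With this file in place, `systemctl daemon-reload` followed by `systemctl enable --now myapp` would start the service and enable it at boot, and `systemctl status myapp` reports its state.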

Week 10

1. I/O

One of the important jobs of an Operating System is to manage various I/O devices including mouse, keyboards, touch pad, disk drives, display adapters, USB devices, Bit-mapped screen, LED, Analog-to-digital converter, On/off switch, network connections, audio I/O, printers etc.

An I/O system is required to take an application I/O request and send it to the physical device, then take whatever response comes back from the device and send it to the application. I/O devices can be divided into two categories: block devices and character devices.
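The two categories, block devices (e.g. disks, addressed in fixed-size blocks) and character devices (e.g. terminals and serial ports, handled as byte streams), can be told apart on a Unix system from a file's mode bits. A minimal sketch:

```python
import stat

def device_category(mode: int) -> str:
    """Classify a file mode as a block device, character device, or neither."""
    if stat.S_ISBLK(mode):
        return "block"       # addressed in fixed-size blocks, e.g. disks
    if stat.S_ISCHR(mode):
        return "character"   # byte streams, e.g. terminals and serial ports
    return "other"
```

On a Linux box, `device_category(os.stat("/dev/sda").st_mode)` would report "block" (the path is illustrative and varies by system).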

2. Platform Controller Hub

The PCH controls certain data paths and support functions used in conjunction with Intel CPUs. These include clocking (the system clock), Flexible Display Interface (FDI) and Direct Media Interface (DMI), although FDI is used only when the chipset is required to support a processor with integrated graphics. As such, I/O functions are reassigned between this new central hub and the CPU compared to the previous architecture: some northbridge functions, the memory controller and PCI-e lanes, were integrated into the CPU while the PCH took over the remaining functions in addition to the traditional roles of the southbridge. AMD has its equivalent for the PCH, known simply as a chipset, no longer using the previous term Fusion controller hub since the release of the Zen architecture in 2017.

3. I/O Controller Hub

I/O Controller Hub (ICH) is a family of Intel southbridge microchips used to manage data communications between a CPU and a motherboard, specifically Intel chipsets based on the Intel Hub Architecture. It is designed to be paired with a second support chip known as a northbridge. As with any other southbridge, the ICH is used to connect and control peripheral devices.

As CPU speeds increased, the support chipset eventually emerged as a bottleneck for data transmission between the processor and the motherboard. Accordingly, starting with the Intel 5 Series, a new architecture was used that incorporated some functions of the traditional north and south bridge chips into the CPU itself, with the remaining functions consolidated into a single Platform Controller Hub (PCH). This replaced the traditional two-chip setup.

4. Socket

A socket is a software object that acts as an end point establishing a bidirectional network communication link between a server-side and a client-side program. In UNIX, a socket can also be referred to as an endpoint for interprocess communication (IPC) within the operating system (OS). In Java, socket classes represent the communication between client and server programs: socket classes handle client-side communication, and server socket classes handle server-side communication.
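The same client/server pairing can be shown in a few lines of Python: a server thread accepts one connection on an ephemeral localhost port and echoes what it receives, while the client connects, sends a message, and reads the reply. A self-contained sketch:

```python
import socket
import threading

def echo_server(listener: socket.socket) -> None:
    """Accept a single connection and echo back whatever the client sends."""
    conn, _addr = listener.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# Server side: bind to an ephemeral localhost port and listen.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))       # port 0 = let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

# Client side: connect to the server's endpoint and exchange one message.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"ping")
    reply = client.recv(1024)
listener.close()
print(reply)  # b'ping'
```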

5. Processor Sockets – Intel and AMD Socket Types

The pace at which the socket types have been changing has slowed down considerably since the heyday of the desktop PC, which is in decline compared to tablet PCs and smartphones, now enjoying their heyday. I can’t see desktop and laptop PCs becoming redundant any time soon, though. I would much rather have a laptop PC than any tablet, and I use a desktop PC at home. Laptops are still relatively expensive. If I want to upgrade my desktop PC to the latest hardware, I just have to back up my files and go online and buy a motherboard, processor and RAM bundle costing about £150. Then it’s just a question of reinstalling Windows, my software and restoring my files. Microsoft’s ISO download of Windows 10 is always right up to date. Gone are the days when you had to create a ‘slipstreamed’ install disc that added Service Packs to an earlier Windows install disc.

6. Kernel I/O Subsystem

The kernel provides many services related to I/O. In this section, we describe several of the services provided by the kernel I/O subsystem and discuss how they build on the hardware and device-driver infrastructure. The services we will cover are I/O scheduling, buffering, caching, spooling, device reservation, and error handling.
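Buffering, the first of these services, is easy to see from user space. In the sketch below, Python's io module stands in for the kernel's buffer cache: bytes written through a BufferedWriter do not reach the underlying file until the buffer is flushed, mirroring how the kernel batches small writes before touching the device.

```python
import io
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

raw = open(path, "wb", buffering=0)              # unbuffered file object
buffered = io.BufferedWriter(raw, buffer_size=4096)

buffered.write(b"hello")                         # data sits in the buffer
on_disk_before = os.path.getsize(path)           # file is still empty (0 bytes)

buffered.flush()                                 # now the data reaches the OS
on_disk_after = os.path.getsize(path)            # 5 bytes in the file

buffered.close()
os.remove(path)
```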

7. STREAMS

The streams mechanism in UNIX provides a bi-directional pipeline between a user process and a device driver, onto which additional modules can be added. The user process interacts with the stream head. The device driver interacts with the device end. Zero or more stream modules can be pushed onto the stream, using ioctl(). These modules may filter and/or modify the data as it passes through the stream.

8. Non-Maskable Interrupt

A non-maskable interrupt (NMI) is a type of hardware interrupt (or signal to the processor) that prioritizes a certain thread or process. Unlike other types of interrupts, the non-maskable interrupt cannot be ignored through the use of interrupt masking techniques.

9. Difference Between Maskable and Non-Maskable Interrupt

The term interrupt refers to an event caused by some component of a computer other than its CPU. An interrupt signals the CPU that an external event requires the system’s immediate attention, and interrupts occur asynchronously. There are two basic types of interrupts, namely maskable and non-maskable interrupts. A maskable interrupt is one that the CPU’s instructions can ignore or disable; such interrupts are triggered in one of two ways, either level-triggered or edge-triggered. A non-maskable interrupt, by contrast, cannot be ignored or disabled by the CPU’s instructions. This type of interrupt comes into play when the response time is critical.
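A rough user-space analogue of interrupt masking is POSIX signal masking: a blocked signal is deferred (pending) rather than lost, just as a masked interrupt is, while a signal like SIGKILL, which cannot be caught or blocked, plays the role of the NMI. A POSIX-only sketch (pthread_sigmask is not available on Windows):

```python
import os
import signal

received = []
signal.signal(signal.SIGUSR1, lambda signum, frame: received.append(signum))

# Mask (block) SIGUSR1 -- the analogue of disabling a maskable interrupt.
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
os.kill(os.getpid(), signal.SIGUSR1)   # raise the signal against ourselves

# Delivery is deferred, not lost: the signal shows up as pending.
pending = signal.sigpending()

# Unmask it -- the pending signal is delivered and the handler runs.
signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGUSR1})
```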

10. How PCI Works

The power and speed of computer components has increased at a steady rate since desktop computers were first developed decades ago. Software makers create new applications capable of utilizing the latest advances in processor speed and hard drive capacity, while hardware makers rush to improve components and design new technologies to keep up with the demands of high-end software.