105. Scheduling is done so as to __
a) increase the waiting time
b) keep the waiting time the same
c) decrease the waiting time
d) none of the mentioned
Answer: c
Explanation: None.
a) the total time in the blocked and waiting queues
b) the total time spent in the ready queue
c) the total time spent in the running queue
d) the total time from the submission of a process till its completion
Answer: b
Explanation: None.
a) increase the turnaround time
b) decrease the turnaround time
c) keep the turnaround time same
d) there is no relation between scheduling and turnaround time
Answer: b
Explanation: None.
a) the total waiting time for a process to finish execution
b) the total time spent in the ready queue
c) the total time spent in the running queue
d) the total time from the submission of a process till its completion
Answer: d
Explanation: None.
a) increase the throughput
b) decrease the throughput
c) increase the duration of a specific amount of work
d) none of the mentioned
Answer: a
Explanation: None.
a) increase CPU utilization
b) decrease CPU utilization
c) keep the CPU more idle
d) none of the mentioned
Answer: a
Explanation: None.
a) the speed of dispatching a process from running to the ready state
b) the time of dispatching a process from running to ready state and keeping the CPU idle
c) the time to stop one process and start running another one
d) none of the mentioned
Answer: c
Explanation: None.
a) process switch
b) task switch
c) context switch
d) all of the mentioned
Answer: d
Explanation: None.
a) When a process switches from the running state to the ready state
b) When a process goes from the running state to the waiting state
c) When a process switches from the waiting state to the ready state
d) All of the mentioned
Answer: b
Explanation: There is no other choice.
a) blocked, short term
b) wait, long term
c) ready, short term
d) ready, long term
Answer: c
Explanation: None.
a) a few very short CPU bursts
b) many very short I/O bursts
c) many very short CPU bursts
d) a few very short I/O bursts
Answer: c
Explanation: None.
a) I/O & OS Burst
b) CPU & I/O Burst
c) Memory & I/O Burst
d) OS & Memory Burst
Answer: b
Explanation: None.
a) time
b) space
c) money
d) all of the mentioned
Answer: a
Explanation: None.
a) multiprocessor systems
b) multiprogramming operating systems
c) larger memory sized systems
d) none of the mentioned
Answer: b
Explanation: None.
a) a process can move to a different classified ready queue
b) classification of ready queue is permanent
c) processes are not classified into groups
d) none of the mentioned
Answer: a
Explanation: None.
a) shortest job scheduling algorithm
b) round robin scheduling algorithm
c) priority scheduling algorithm
d) multilevel queue scheduling algorithm
Answer: d
Explanation: None.
a) shortest job scheduling algorithm
b) round robin scheduling algorithm
c) priority scheduling algorithm
d) multilevel queue scheduling algorithm
Answer: b
Explanation: None.
a) all process
b) currently running process
c) parent process
d) init process
Answer: b
Explanation: None.
a) CPU is allocated to the process with highest priority
b) CPU is allocated to the process with lowest priority
c) Equal priority processes can not be scheduled
d) None of the mentioned
Answer: a
Explanation: None.
a) first-come, first-served scheduling
b) shortest job scheduling
c) priority scheduling
d) none of the mentioned
Answer: a
Explanation: None.
a) waiting time
b) turnaround time
c) response time
d) throughput
Answer: b
Explanation: None.
a) job queue
b) ready queue
c) execution queue
d) process queue
Answer: b
Explanation: None.
a) dispatcher
b) interrupt
c) scheduler
d) none of the mentioned
Answer: a
Explanation: None.
a) Sending signals to CPU through a system bus
b) Executing a special program called interrupt program
c) Executing a special program called system program
d) Executing a special operation called system call
Answer: a
Explanation: None.
a) Bottom Layer (0) is the User interface
b) Highest Layer (N) is the User interface
c) Bottom Layer (N) is the hardware
d) Highest Layer (N) is the hardware
Answer: b
Explanation: None.
a) the CPU uses polling to watch the control bit constantly, looping to see if a device is ready
b) the CPU writes one data byte to the data register and sets a bit in control register to show that a byte is available
c) the CPU receives an interrupt when the device is ready for the next byte
d) the CPU runs a user written code and does accordingly
Answer: c
Explanation: None.
a) the CPU uses polling to watch the control bit constantly, looping to see if a device is ready
b) the CPU writes one data byte to the data register and sets a bit in control register to show that a byte is available
c) the CPU receives an interrupt when the device is ready for the next byte
d) the CPU runs a user written code and does accordingly
Answer: a
Explanation: None.
a) the CPU uses polling to watch the control bit constantly, looping to see if a device is ready
b) the CPU writes one data byte to the data register and sets a bit in control register to show that a byte is available
c) the CPU receives an interrupt when the device is ready for the next byte
d) the CPU runs a user written code and does accordingly
Answer: b
Explanation: None.
a) High speed devices(disks and communications network)
b) Low speed devices
c) Utilizing CPU cycles
d) All of the mentioned
Answer: a
Explanation: None.
a) It is an address that is indexed to an interrupt handler
b) It is a unique device number that is indexed by an address
c) It is a unique identity given to an interrupt
d) None of the mentioned
Answer: a
Explanation: None.
a) Information Service Request
b) Interrupt Service Request
c) Interrupt Service Routine
d) Information Service Routine
Answer: c
Explanation: None.
a) hardware generated interrupt caused by an error
b) software generated interrupt caused by an error
c) user generated interrupt caused by an error
d) none of the mentioned
Answer: b
Explanation: None.
a) Sending signals to CPU through bus
b) Executing a special operation called system call
c) Executing a special program called system program
d) Executing a special program called interrupt trigger program
Answer: b
Explanation: None.
a) boot program
b) bootloader
c) initializer
d) bootstrap program
Answer: d
Explanation: None.
a) allows a process to invoke memory on a remote object
b) allows a thread to invoke a method on a remote object
c) allows a thread to invoke memory on a remote object
d) allows a process to invoke a method on a remote object
Answer: b
Explanation: None.
a) Remote Memory Installation
b) Remote Memory Invocation
c) Remote Method Installation
d) Remote Method Invocation
Answer: d
Explanation: None.
a) machine dependent representation of data
b) machine representation of data
c) machine-independent representation of data
d) none of the mentioned
Answer: c
Explanation: None.
a) transmits the message to the server where the server side stub receives the message and invokes procedure on the server side
b) packs the parameters into a form transmittable over the network
c) locates the port on the server
d) all of the mentioned
Answer: d
Explanation: None.
a) stub
b) identifier
c) name
d) process identifier
Answer: a
Explanation: None.
a) Variables
b) Sockets
c) Ports
d) Service names
Answer: c
Explanation: None.
a) for communication between two processes remotely different from each other on the same system
b) for communication between two processes on the same system
c) for communication between two processes on separate systems
d) none of the mentioned
Answer: c
Explanation: None.
a) is referred to as a message system with buffering
b) is referred to as a message system with no buffering
c) is referred to as a link
d) none of the mentioned
Answer: b
Explanation: The zero-capacity queue is referred to as a message system with no buffering. A zero-capacity queue has a maximum capacity of zero, so no message can ever wait in it.
a) the queue can store at least one message
b) the sender blocks until the receiver receives the message
c) the sender keeps sending and the messages don’t wait in the queue
d) none of the mentioned
Answer: b
Explanation: In a zero-capacity queue the sender blocks until the receiver receives the message. Since the maximum capacity of the queue is zero, no message can wait in it, so the sender and receiver must synchronize on every message.
a) the sending process keeps sending until the message is received
b) the sending process sends the message and resumes operation
c) the sending process keeps sending until it receives a message
d) none of the mentioned
Answer: b
Explanation: In a non-blocking send, the sending process sends the message and resumes operation without waiting for the message to be received. This is also known as an asynchronous send.
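For illustration only (not part of the original question set), the sketch below contrasts blocking and non-blocking sends using POSIX message queues on a Linux-like system; the queue name and sizes are made up.

/* Sketch: blocking vs. non-blocking send with POSIX message queues.
   Assumes a Linux-like system (link with -lrt); name and sizes are illustrative. */
#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 1, .mq_msgsize = 32 };

    /* Descriptor for blocking sends: mq_send() waits while the queue is full. */
    mqd_t qb = mq_open("/demo_q", O_CREAT | O_WRONLY, 0600, &attr);
    /* Descriptor for non-blocking sends: mq_send() fails with EAGAIN instead of waiting. */
    mqd_t qn = mq_open("/demo_q", O_WRONLY | O_NONBLOCK);
    if (qb == (mqd_t)-1 || qn == (mqd_t)-1) { perror("mq_open"); return 1; }

    mq_send(qb, "first", 6, 0);                 /* fills the one-slot queue */
    if (mq_send(qn, "second", 7, 0) == -1 && errno == EAGAIN)
        puts("non-blocking send: queue full, sender resumes immediately");
    /* A second mq_send on qb here would block until a receiver drained the queue. */

    mq_close(qb);
    mq_close(qn);
    mq_unlink("/demo_q");
    return 0;
}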
a) there is another process R to handle and pass on the messages between P and Q
b) there is another machine between the two processes to help communication
c) there is a mailbox to help communication between P and Q
d) none of the mentioned
Answer: c
Explanation: In indirect communication between processes P and Q there is a mailbox to help communication between P and Q. A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed.
a) A communication link can be associated with N processes (N = max. number of processes supported by the system)
b) A communication link is associated with exactly two processes
c) Exactly N/2 links exist between each pair of processes (N = max. number of processes supported by the system)
d) Exactly two links exist between each pair of processes
Answer: b
Explanation: For direct communication, a communication link is associated with exactly two processes. One communication link must exist between a pair of processes.
a) communication link
b) message-passing link
c) synchronization link
d) all of the mentioned
Answer: a
Explanation: The link between two processes P and Q used to send and receive messages is called a communication link. If two processes P and Q want to communicate with each other, a communication link must exist between them so that both processes are able to send and receive messages over that link.
a) have to be of a fixed size
b) have to be a variable size
c) can be fixed or variable sized
d) none of the mentioned
Answer: c
Explanation: Messages sent by a process can be of fixed or variable size. If the message size is fixed, the system-level implementation is straightforward but programming becomes more difficult. If the message size is variable, the system-level implementation is more complex but programming becomes simpler.
a) write & delete message
b) delete & receive message
c) send & delete message
d) receive & send message
Answer: d
Explanation: The two operations provided by the IPC facility are send message and receive message, through which cooperating processes exchange data.
a) communicate with each other without sharing the same address space
b) communicate with one another by resorting to shared data
c) share data
d) name the recipient or sender of the message
Answer: a
Explanation: Message Passing system allows processes to communicate with each other without sharing the same address space.
a) allows processes to communicate and synchronize their actions when using the same address space
b) allows processes to communicate and synchronize their actions
c) allows the processes to only synchronize their actions without communication
d) none of the mentioned
Answer: b
Explanation: Interprocess Communication allows processes to communicate and synchronize their actions. Interprocess Communication (IPC) mechanism is used by cooperating processes to exchange data and information.
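As a minimal sketch (POSIX system assumed; the message text is arbitrary), a pipe is one common way cooperating processes exchange data without sharing an address space:

/* Sketch: message passing between a parent and child through a pipe. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];                            /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                       /* child: receives the message */
        char buf[64];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fd[0]);
        return 0;
    }

    close(fd[0]);                         /* parent: sends the message */
    const char *msg = "hello from parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);                           /* reap the child */
    return 0;
}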
a) be a duplicate of the parent process
b) never be a duplicate of the parent process
c) cannot have another program loaded into it
d) never have another program loaded into it
Answer: a
Explanation: The child process can be a duplicate of the parent process. The child process created by fork consists of a copy of the address space of the parent process.
a) A Negative integer, Zero
b) Zero, A Negative integer
c) Zero, A nonzero integer
d) A nonzero integer, Zero
Answer: c
Explanation: In UNIX, the fork system call returns zero in the child process and a nonzero value in the parent process: it returns the PID of the newly created (child) process to the parent and returns zero to the newly created child.
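A small sketch of this return-value convention, assuming a POSIX system:

/* Sketch: fork() returns 0 in the child and the child's PID in the parent. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid < 0) {                        /* fork failed */
        perror("fork");
        return 1;
    } else if (pid == 0) {                /* zero: this copy is the child */
        printf("child:  my pid is %d\n", (int)getpid());
    } else {                              /* nonzero: this copy is the parent */
        printf("parent: fork returned child pid %d\n", (int)pid);
        wait(NULL);
    }
    return 0;
}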
a) Process Control Block
b) Device Queue
c) Process Identifier
d) None of the mentioned
Answer: c
Explanation: In Unix, each process is identified by its Process Identifier or PID. The PID provides unique value to each process in the system so that each process can be identified uniquely.
a) Multiprocessing, Multiprogramming
b) Multiprogramming, Uniprocessing
c) Multiprogramming, Multiprocessing
d) Uniprogramming, Multiprocessing
Answer: d
Explanation: With uniprogramming only one process can execute at a time, while all other processes wait for the processor. With multiprocessing more than one process can run simultaneously, each on a different processor. A uniprogramming system has only one program inside a core, while a multiprocessing system has processes running on multiple cores; a core is the unit that executes instructions and stores data locally in registers.
a) Normally
b) Abnormally
c) Normally or abnormally
d) None of the mentioned
Answer: c
Explanation: Cascading termination refers to termination of all child processes if the parent process terminates Normally or Abnormally. Some systems don’t allow child processes to exist if the parent process has terminated. Cascading termination is normally initiated by the operating system.
a) wait
b) fork
c) exit
d) exec
Answer: a
Explanation: A parent process that calls the wait system call is suspended until one of its child processes terminates. The parameter passed to wait receives the exit status of the child, and the call returns the PID of the terminated process.
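A minimal sketch of this behaviour on a POSIX system; the exit status value 42 is arbitrary:

/* Sketch: the parent is suspended in wait() until the child terminates. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                       /* child */
        sleep(1);
        exit(42);                         /* exit status to be collected by the parent */
    }

    int status = 0;
    pid_t done = wait(&status);           /* parent blocks here until the child exits */
    if (done > 0 && WIFEXITED(status))
        printf("child %d exited with status %d\n", (int)done, WEXITSTATUS(status));
    return 0;
}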
a) overloading the system by using a lot of secondary storage
b) under-loading the system by very less CPU utilization
c) overloading the system by creating a lot of sub-processes
d) crashing the system by utilizing multiple resources
Answer: c
Explanation: Restricting a child process to a subset of the parent's resources prevents any process from overloading the system by creating a lot of sub-processes. When a process creates a child, the child needs certain resources to complete its task; if children could demand resources directly from the system without limit, the system would become overloaded, so the parent shares its resources among its children.
a) shared data structures
b) procedures that operate on shared data structure
c) synchronization between concurrent procedure invocation
d) all of the mentioned
Answer: d
Explanation: A monitor is a module that encapsulates shared data structures, procedures that operate on shared data structure, synchronization between concurrent procedure invocation.
a) hardware level
b) software level
c) both hardware and software level
d) none of the mentioned
Answer: c
Explanation: Process synchronization can be done at both the hardware and the software level. Critical section problems can be resolved using hardware synchronization, but this approach is not simple to implement, so software synchronization is mostly used.
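Purely as an illustration (C11 atomics assumed, not taken from the question set), hardware-level synchronization is commonly exposed as an atomic test-and-set operation, on which a simple spinlock can be built:

/* Sketch: a spinlock built on the atomic test-and-set primitive (C11 atomics). */
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;
static int shared_counter = 0;

static void acquire(void)
{
    while (atomic_flag_test_and_set(&lock))  /* spins while another thread holds the lock */
        ;                                    /* busy wait */
}

static void release(void)
{
    atomic_flag_clear(&lock);
}

int main(void)
{
    acquire();
    shared_counter++;                        /* critical section */
    release();
    printf("counter = %d\n", shared_counter);
    return 0;
}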
a) priority inversion
b) priority removal
c) priority exchange
d) priority modification
Answer: a
Explanation: When a high priority task is indirectly preempted by a medium priority task effectively inverting the relative priority of the two tasks, the scenario is called priority inversion.
a) mutex locks
b) binary semaphores
c) both mutex locks and binary semaphores
d) none of the mentioned
Answer: c
Explanation: Mutual exclusion can be provided by both mutex locks and binary semaphores. Mutex is short for mutual exclusion; a binary semaphore also provides a mutual exclusion mechanism and behaves similarly to a mutex lock.
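A minimal sketch of mutual exclusion with a POSIX mutex (compile with -pthread; the counter and iteration count are arbitrary):

/* Sketch: two threads increment a shared counter under a mutex. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);       /* enter the critical section */
        counter++;
        pthread_mutex_unlock(&m);     /* leave the critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (always 200000 with the mutex)\n", counter);
    return 0;
}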
a) that can not drop below zero
b) that can not be more than zero
c) that can not drop below one
d) that can not be more than one
Answer: a
Explanation: A semaphore is a shared integer variable that cannot drop below zero. With a binary semaphore, a value of zero means some process is using the critical resource and no other process can access it until it is released. With a counting semaphore, a value of zero means no resource instance is currently available.
a) thread
b) pipe
c) semaphore
d) socket
Answer: c
Explanation: A semaphore is a synchronization tool: a mechanism that synchronizes threads or processes and controls their access to critical resources. There are two types of semaphores: (i) binary semaphores and (ii) counting semaphores.
a) mutual exclusion
b) critical exclusion
c) synchronous exclusion
d) asynchronous exclusion
Answer: a
Explanation: If a process is executing in its critical section, then no other process may execute in its critical section; this condition is called mutual exclusion. The critical section operates on data shared between multiple processes, and if it were executed by more than one of them concurrently the outcome would not be the desired one, so the critical section must not be executed concurrently.
a) dynamic condition
b) race condition
c) essential condition
d) critical condition
Answer: b
Explanation: A situation in which several processes access the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place is called a race condition.
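For contrast with the mutex sketch above, here is an illustrative race (POSIX threads assumed): the same counter updated by two threads with no locking, so the final value depends on the interleaving.

/* Sketch: a data race — the same counter, updated by two threads with no locking. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;              /* shared data, unprotected */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++)
        counter++;                    /* read-modify-write, not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* The result depends on how the threads interleave; increments are often lost. */
    printf("counter = %ld (expected 200000, usually less)\n", counter);
    return 0;
}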
a) cooperating process
b) child process
c) parent process
d) init process
Answer: a
Explanation: A cooperating process can be affected by, and can affect, other processes executing in the system. A process that shares data with other processes is known as a cooperating process.
a) General purpose registers
b) Translation lookaside buffer
c) Program counter
d) All of the mentioned
Answer: b
Explanation: The Translation Look-aside Buffer (TLB) need not necessarily be saved on a context switch between processes. The TLB is a special, small, fast-lookup hardware cache used to reduce memory access time.
a) the value of the CPU registers
b) the process state
c) memory-management information
d) context switch time
Answer: d
Explanation: The context of a process in the PCB of a process does not contain context switch time. When switching CPU from one process to another, the current context of the process needs to be saved. It includes values of the CPU registers, process states, memory-management information.
a) Running state
b) Ready state
c) Suspended state
d) Terminated state
Answer: b
Explanation: Suppose a process is in the "Blocked" state waiting for some I/O service. When the service is completed, it goes to the ready state. A process never goes directly from the waiting state to the running state; only processes in the ready state move to the running state, when the operating system allocates the CPU.
a) the processor executes more than one process at a time
b) the programs are developed by more than one person
c) more than one process resides in the memory
d) a single user can execute many programs at the same time
Answer: c
Explanation: In a multiprogramming environment more than one process resides in the memory. Whenever a CPU is available, one process amongst all present in memory gets the CPU for execution. Multiprogramming increases CPU utilization.
a) Blocked state
b) Ready state
c) Suspended state
d) Terminated state
Answer: b
Explanation: In a time-sharing operating system, when the time slot given to a process is completed, the process goes from the running state to the Ready State. In a time-sharing operating system unit time is defined for sharing CPU, it is called a time quantum or time slice. If a process takes less than 1 time quantum, then the process itself releases the CPU.
a) block
b) wakeup
c) dispatch
d) none of the mentioned
Answer: a
Explanation: The only state transition initiated by the user process itself is block. Whenever a user process initiates an I/O request it goes into the blocked state and stays there until the I/O request is completed.
a) The length of their queues
b) The type of processes they schedule
c) The frequency of their execution
d) None of the mentioned
Answer: c
Explanation: The primary distinction between the short-term scheduler and the long-term scheduler is the frequency of their execution. The short-term scheduler executes frequently while the long-term scheduler executes much less frequently.
a) It selects which process has to be brought into the ready queue
b) It selects which process has to be executed next and allocates CPU
c) It selects which process to remove from memory by swapping
d) None of the mentioned
Answer: b
Explanation: A short-term scheduler selects a process which has to be executed next and allocates CPU. Short-term scheduler selects a process from the ready queue. It selects processes frequently.
a) It selects which process has to be brought into the ready queue
b) It selects which process has to be executed next and allocates CPU
c) It selects which process to remove from memory by swapping
d) None of the mentioned
Answer: c
Explanation: A medium-term scheduler selects which process to remove from memory by swapping. The medium-term scheduler swaps a process out and later swaps it back in; swapping helps to free up memory.
a) full, little
b) full, lot
c) empty, little
d) empty, lot
Answer: c
Explanation: If all processes are I/O bound, the ready queue will be almost empty and the short-term scheduler will have little to do. I/O-bound processes spend more time doing I/O than computation.
a) It selects processes which have to be brought into the ready queue
b) It selects processes which have to be executed next and allocates CPU
c) It selects processes which have to be removed from memory by swapping
d) None of the mentioned
Answer: a
Explanation: A long-term scheduler selects processes which have to be brought into the ready queue. When processes enter the system, they are put in the job queue. Long-term scheduler selects processes from the job queue and puts them in the ready queue. It is also known as Job Scheduler.
a) It is removed from all queues
b) It is removed from all, but the job queue
c) Its process control block is de-allocated
d) Its process control block is never de-allocated
Answer: a
Explanation: When a process terminates, it is removed from all queues. All resources allocated to that process are deallocated and returned to the OS.
a) It is placed in an I/O queue
b) It is placed in a waiting queue
c) It is placed in the ready queue
d) It is placed in the Job queue
Answer: a
Explanation: When a process issues an I/O request it is placed in an I/O queue. I/O devices are resources that should be used effectively, and every process should get access to them. There may be multiple processes that have requested I/O; depending on the scheduling algorithm, the device is allocated to a particular process, and after the I/O operation completes it is returned to the OS.
a) Job Queue
b) PCB queue
c) Device Queue
d) Ready Queue
Answer: b
Explanation: PCB queue does not belong to queues for processes. PCB is a process control block which contains information related to process. Each process is represented by PCB.
a) only one task at a time
b) multiple tasks at a time
c) only two tasks at a time
d) all of the mentioned
Answer: a
Explanation: A single thread of control allows the process to perform only one task at a time. In the case of multi-core, multiple threads can be run simultaneously and can perform multiple tasks at a time.
a) the number of processes executed per unit time
b) the number of processes in the ready queue
c) the number of processes in the I/O queue
d) the number of processes in memory
Answer: d
Explanation: The degree of multiprogramming is the number of processes in memory. Multiprogramming is one of the most important abilities of an OS for increasing CPU utilization: a single process generally cannot use the CPU and I/O devices all of the time, so whenever the CPU or an I/O device is free another process can use it.
a) Process Register
b) Program Counter
c) Process Table
d) Process Unit
Answer: c
Explanation: The PCBs of all current processes are entered in the Process Table. The Process Table holds the status of each and every process created in the OS along with its PID.
a) Process type variable
b) Data Structure
c) A secondary storage section
d) A Block in memory
Answer: b
Explanation: A Process Control Block (PCB) is a data structure. It contains information related to a process such as Process State, Program Counter, CPU Register, etc. Process Control Block is also known as Task Control Block.
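Illustrative only: a PCB is usually a kernel structure along the following lines; the field names here are invented for the sketch and differ between real kernels.

/* Sketch: the kind of fields a Process Control Block might hold.
   Field names are invented for this illustration, not taken from any real kernel. */
#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;               /* process identifier             */
    enum proc_state state;             /* current process state          */
    unsigned long   program_counter;   /* next instruction to execute    */
    unsigned long   registers[16];     /* saved CPU registers            */
    void           *page_table;        /* memory-management information  */
    int             priority;          /* CPU-scheduling information     */
    int             open_files[32];    /* I/O status information         */
    struct pcb     *next;              /* link in a scheduling queue     */
};

int main(void)
{
    struct pcb p = { .pid = 1, .state = NEW };
    printf("pid %d created in state %d\n", p.pid, p.state);
    return 0;
}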
a) New
b) Old
c) Waiting
d) Running
Answer: b
Explanation: There is no process state such as old. When a process is created then the process is in New state. When the process gets the CPU for its execution then the process is in Running state. When the process is waiting for an external event then the process is in a Waiting state.
a) the final activity of the process
b) the activity just executed by the process
c) the activity to next be executed by the process
d) the current activity of the process
Answer: d
Explanation: The state of a process is defined by the current activity of the process. A process state changes when the process executes. The process states are as New, Ready, Running, Wait, Terminated.
a) Output
b) Throughput
c) Efficiency
d) Capacity
Answer: b
Explanation: The number of processes completed per unit time is known as throughput. Suppose there are 4 processes A, B, C and D taking 1, 3, 4 and 7 units of time respectively. In 10 units of time, throughput is higher if A, B and C run first, because 3 processes can complete (1 + 3 + 4 = 8 units). If process D runs first, throughput is lower, because at most 2 processes can complete (7 + 1 = 8 units). Throughput is low for processes that take a long time to execute and high for processes that finish quickly.
a) Code
b) Stack
c) Bootstrap program
d) Data
Answer: c
Explanation: Process Control Block (PCB) contains information related to a process such as Process State, Program Counter, CPU Register, etc. Process Control Block is also known as Task Control Block. Bootstrap program is a program which runs initially when the system or computer is booted or rebooted.
a) wait
b) exit
c) fork
d) get
Answer: a
Explanation: wait() system call is used by the parent process to determine termination of child process. The parent process uses wait() system call and gets the exit status of the child process as well as the pid of the child process which is terminated.
a) Function parameters
b) Local variables
c) Return addresses
d) PID of child process
Answer: d
Explanation: Process stack contains Function parameters, Local variables and Return address. It does not contain the PID of child process.
a) each process is blocked and will remain so forever
b) each process is terminated
c) all processes are trying to kill each other
d) none of the mentioned
Answer: a
Explanation: Deadlock occurs when process A holds one resource while waiting for another resource that is held by process B, and process B is in turn waiting for the resource held by process A. Each process remains in the waiting state until the other releases the resource it occupies, which never happens, so both are blocked forever.
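A minimal sketch of how such a cycle can arise (POSIX threads assumed; the sleep is only there to force the bad interleaving): each thread holds one lock and waits forever for the other.

/* Sketch: classic deadlock — two threads take two locks in opposite orders. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

static void *proc_a(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&r1);          /* A holds resource 1 */
    sleep(1);                         /* give B time to grab resource 2 */
    pthread_mutex_lock(&r2);          /* A waits forever: B holds resource 2 */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

static void *proc_b(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&r2);          /* B holds resource 2 */
    sleep(1);
    pthread_mutex_lock(&r1);          /* B waits forever: A holds resource 1 */
    pthread_mutex_unlock(&r1);
    pthread_mutex_unlock(&r2);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, proc_a, NULL);
    pthread_create(&b, NULL, proc_b, NULL);
    pthread_join(a, NULL);            /* never returns: both threads are blocked */
    pthread_join(b, NULL);
    puts("unreachable");
    return 0;
}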
a) communication within the process
b) communication between two process
c) communication between two threads of same process
d) none of the mentioned
Answer: b
Explanation: Interprocess Communication (IPC) is a communication mechanism that allows processes to communicate with each other and synchronize their actions without sharing the same address space. IPC can be achieved using shared memory and message passing.
a) when process is scheduled to run after some execution
b) when process is unable to run until some task has been completed
c) when process is using the CPU
d) none of the mentioned
Answer: a
Explanation: The ready state means the process has all the resources required for its execution once the CPU is allocated; it is ready to run but waiting for the CPU to be allocated.
a) normal exit
b) fatal error
c) killed by another process
d) all of the mentioned
Answer: d
Explanation: A process can be terminated normally by completing its task or because of fatal error or killed by another process or forcefully killed by a user. When the process completes its task without any error then it exits normally. The process may exit abnormally because of the occurrence of fatal error while it is running. The process can be killed or terminated forcefully by another process.
a) fork
b) create
c) new
d) none of the mentioned
Answer: a
Explanation: In UNIX, a new process is created by fork() system call. fork() system call returns a process ID which is generally the process id of the child process created.
a) address space and global variables
b) open files
c) pending alarms, signals and signal handlers
d) all of the mentioned
Answer: d
Explanation: In Operating Systems, each process has its own address space which contains code, data, stack and heap segments or sections. Each process also has a list of files which is opened by the process as well as all pending alarms, signals and various signal handlers.
a) uniprogramming systems
b) uniprocessing systems
c) unitasking systems
d) none of the mentioned
Answer: b
Explanation: Systems that allow only one process to execute at a time are uniprocessing systems, i.e. systems with a single processor, whereas systems that allow more than one process to execute at a time are called multiprocessing systems.
a) monolithic kernel
b) hybrid kernel
c) microkernel
d) monolithic kernel with modules
Answer: b
Explanation: OS X has a hybrid kernel. Hybrid kernel is a combination of two different kernels. OS X is developed by Apple and originally it is known as Mac OS X.
a) VxWorks
b) QNX
c) RTLinux
d) Palm OS
Answer: d
Explanation: VxWorks, QNX & RTLinux are real-time operating systems. Palm OS is a mobile operating system. Palm OS is developed for Personal Digital Assistants (PDAs).
a) DTrace
b) DLocate
c) DMap
d) DAdd
Answer: a
Explanation: A facility that dynamically adds probes to a running system, both in user processes and in the kernel, is called DTrace. It is very useful for troubleshooting kernels in real time.
a) log file
b) another running process
c) new file
d) none of the mentioned
Answer: a
Explanation: If a process fails, most operating systems write the error information to a log file. Log file is examined by the debugger, to find out what is the actual cause of that particular problem. Log file is useful for system programmers for correcting errors.
a) Round Robin
b) Shortest Job First
c) Priority
d) All of the mentioned
Answer: d
Explanation: In Operating Systems, CPU scheduling algorithms are:
i) First Come First Served scheduling
ii) Shortest Job First scheduling
iii) Priority scheduling
iv) Round Robin scheduling
v) Multilevel Queue scheduling
vi) Multilevel Feedback Queue scheduling
All of these scheduling algorithms have their own advantages and disadvantages.
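As an illustration of how such policies are compared (the burst times 24, 3 and 3 are made-up example values), the sketch below computes the average waiting time under First Come First Served:

/* Sketch: average waiting time under First Come First Served scheduling.
   Burst times are arbitrary example values. */
#include <stdio.h>

int main(void)
{
    int burst[] = { 24, 3, 3 };                 /* CPU bursts in arrival order */
    int n = sizeof(burst) / sizeof(burst[0]);
    int waiting = 0, total_waiting = 0;

    for (int i = 0; i < n; i++) {
        total_waiting += waiting;               /* each process waits for all before it */
        printf("P%d waits %d units\n", i + 1, waiting);
        waiting += burst[i];
    }
    printf("average waiting time = %.2f units\n", (double)total_waiting / n);
    return 0;
}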
a) to get and execute the next user-specified command
b) to provide the interface between the API and application program
c) to handle the files in operating system
d) none of the mentioned
Answer: a
Explanation: The main function of command interpreter is to get and execute the next user-specified command. Command Interpreter checks for valid command and then runs that command else it will throw an error.
a) power failure
b) lack of paper in printer
c) connection failure in the network
d) all of the mentioned
Answer: d
Explanation: All the mentioned errors are handled by OS. The OS is continuously monitoring all of its resources. Also, the OS is constantly detecting and correcting errors.
a) kernel is the program that constitutes the central core of the operating system
b) kernel is the first part of operating system to load into memory during booting
c) kernel is made of various modules which can not be loaded in running operating system
d) kernel remains in the memory during the entire computer session
Answer: c
Explanation: The kernel is the first part of the operating system to be loaded into memory during booting, and it remains in memory for as long as the OS is running. The kernel is the core of the OS, responsible for managing resources, allowing multiple processes to use them, and providing services to processes. Kernel modules can be loaded and unloaded at run time, i.e. while the operating system is running.
a) System calls
b) API
c) Library
d) Assembly instructions
Answer: a
Explanation: To access the services of the operating system, an interface is provided by system calls. Generally these are functions written in C and C++; open, close, read and write are some of the most commonly used system calls.
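A minimal sketch of requesting OS services through those system calls on a POSIX system; the file name is arbitrary:

/* Sketch: requesting OS services through the open/read/write/close system calls. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("example.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd == -1) { perror("open"); return 1; }

    write(fd, "hello\n", 6);                 /* write: transfer data to the file */
    lseek(fd, 0, SEEK_SET);                  /* reposition to the start of the file */

    char buf[16];
    ssize_t n = read(fd, buf, sizeof(buf));  /* read: transfer data back */
    if (n > 0) write(STDOUT_FILENO, buf, n);

    close(fd);                               /* close: release the descriptor */
    return 0;
}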
a) collection of programs that manages hardware resources
b) system service provider to the application programs
c) interface between the hardware and application programs
d) all of the mentioned
Answer: d
Explanation: An Operating System acts as an intermediary between user/user applications/application programs and hardware. It is a program that manages hardware resources. It provides services to application programs.