KTU Operating Systems 2019 Model Question Paper Solved Answer key



OS is a KTU 2019 scheme course for S4 CSE students. Why should we solve the model question paper? Operating Systems was introduced in the 2019 scheme of KTU, so it is important to practise the model questions in the new pattern. Here we provide the solved answer key for the model question paper given in the syllabus. CST200 begins with the basics of operating systems, and the model questions below cover many topics in the field of operating systems.
An operating system serves as a bridge between a computer's user and its hardware. Its function is to offer an environment in which a user can run programs conveniently and efficiently. An operating system is the software that controls the computer hardware. The hardware must provide appropriate mechanisms to ensure the correct operation of the computer system and to prevent user programs from interfering with that operation.

Before moving to the answer key, note the main aim of this course: to understand how an operating system manages system resources effectively. The course teaches the fundamentals of operating system design and helps you detect and fix a variety of operating system issues.

Board: KTU
Scheme: 2019 New Scheme
Year: Second Year
Semester: S4 Computer Science
Subject: CST200 | Operating Systems Solved Model Question Paper
Type: Model Question Paper Solved
Category: KTU S4 Computer Science

To fulfil our mission, we try hard to give you quality, updated study materials. Here is the answer key for the KTU Operating Systems model question paper in the syllabus. Study well, and don't forget to share with your friends...


Model Question Paper 

APJ ABDUL KALAM TECHNOLOGICAL UNIVERSITY FOURTH SEMESTER B.TECH DEGREE EXAMINATION,

Course Code: CST 206
Course name: OPERATING SYSTEMS
Max Marks: 100 
Duration: 3 Hours

PART-A

(Answer All Questions. Each question carries 3 marks)

1. How does hardware find the Operating System kernel after the system is switched on?

Ans:
When the CPU receives a reset event - for example, when it is powered on or rebooted - the instruction register is loaded with a predefined memory location, and execution starts there. At that location lies the initial bootstrap program. This program is stored in read-only memory (ROM), because RAM is in an unknown state at system start-up; ROM is also convenient because it needs no initialization and cannot easily be infected by a computer virus. The bootstrap program (boot loader) then locates the operating system kernel, loads it into memory, and starts it.

2. What is the purpose of system calls in the operating system?

Ans:
  • Reading from and writing to files requires system calls.
  • Creating or deleting files requires system calls.
  • System calls are used to create and manage new processes.
  • Sending and receiving network packets requires system calls.
  • Access to hardware devices such as scanners and printers requires system calls.
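As a sketch of how these services are invoked in practice, the following Python snippet uses the `os` module, whose functions are thin wrappers around the corresponding Unix system calls (`open`, `write`, `read`, `close`, `unlink`); the file name `demo.txt` is just an illustration:

```python
import os

# Create a file, write to it, and read it back -- each os.* call below
# maps to a kernel system call on Unix-like systems.
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
os.write(fd, b"hello via system calls")
os.close(fd)

fd = os.open("demo.txt", os.O_RDONLY)
data = os.read(fd, 100)
os.close(fd)
os.remove("demo.txt")   # unlink() system call

print(data.decode())
```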

3. Why is context switching considered as an overhead to the system?

Ans: 
Context switching imposes an overhead cost because of TLB flushes, sharing of the cache between multiple tasks, running the task scheduler, etc. Context switching between two threads of the same process is faster than between two different processes, because threads share the same virtual memory map, so no TLB flush is required.

4. How is inter-process communication implemented using shared memory?

Ans:
  • Communication between processes using shared memory requires the processes to share a common region of memory; how it is used depends entirely on the developer.
  • One way to communicate using shared memory is the following: suppose process 1 and process 2 run simultaneously and need to share certain resources or information.
  • Process 1 generates data about some computation and saves it in the shared memory region.
  • When process 2 needs that data, it reads the record stored in shared memory and acts on the information generated by process 1.
  • Processes can use the shared region both to read data produced by another process and to pass specific information to other processes.
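The producer/consumer exchange described above can be sketched with an anonymous shared memory mapping inherited across `fork()`. This is a minimal, Unix-only illustration; a real application would add synchronization so the reader never runs before the writer - here the parent simply waits for the child to exit:

```python
import mmap
import os

# Anonymous shared mapping: both processes see the same 64 bytes after fork()
shm = mmap.mmap(-1, 64)

pid = os.fork()
if pid == 0:
    # Child: the producer writes into shared memory, then exits
    shm.seek(0)
    shm.write(b"hello from child")
    os._exit(0)
else:
    # Parent: the consumer waits for the child, then reads the shared data
    os.waitpid(pid, 0)
    shm.seek(0)
    data = shm.read(16)
    print(data.decode())
```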


5. Describe the resource allocation graph for the following.
                    a) with a deadlock 
                    b) with a cycle but no deadlock.

Ans:

a) with a deadlock:

R1 and R3 have only one instance each, so the cycle in the graph implies a deadlock.

b) with a cycle but no deadlock.

R1 and R2 have more than one instance each, so a cycle can exist without a deadlock.

6. What is a critical section? What requirement should be satisfied by a solution to the critical
section problem?

Ans: 
Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code, called a critical section, in which the process may change common variables, update a table, write a file, and so on. The important feature of the system is that when one process is executing in its critical section, no other process is allowed to execute in its critical section. Each process must request permission to enter its critical section; the section of code implementing this request is the entry section. The critical section may be followed by an exit section, and the remaining code is the remainder section. The general structure of a typical process Pi is shown in the figure.


A solution to the critical section problem must satisfy the following three requirements:

  1. Mutual exclusion: If process Pi is executing in its critical section, then no other process can execute in its critical section.
  2. Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only the processes that are not executing in their remainder sections can participate in deciding which process will enter its critical section next, and this selection cannot be postponed indefinitely.
  3. Bounded waiting: There is a limit, or bound, on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. This prevents starvation.
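As an illustration of mutual exclusion (not a complete solution to the critical-section problem), a lock can act as the entry and exit sections around a shared counter; the thread and iteration counts here are arbitrary:

```python
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        with lock:           # entry section: acquire the lock
            counter += 1     # critical section: update the shared variable
        # exit section: the lock is released on leaving the 'with' block

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # all 4 * 100000 increments survive, thanks to mutual exclusion
```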


7. Consider the reference string 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6. How many page faults occur while using FCFS for the following cases.

                      a) frame=2   
                      b) frame=3
Ans: 

Simulate FIFO (First-In First-Out) replacement with 2 frames and with 3 frames, starting from empty frames: with 2 frames there are 18 page faults, and with 3 frames there are 16.
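A short simulation, assuming pure FIFO replacement starting from empty frames, counts the faults for both frame sizes:

```python
from collections import deque

def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:        # frames full: evict the
                frames.discard(queue.popleft())  # oldest-loaded page
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6]
f2 = fifo_faults(refs, 2)
f3 = fifo_faults(refs, 3)
print(f2, f3)
```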


8. Differentiate between internal and external fragmentations.

Ans:  
There are two types of fragmentation: internal fragmentation and external fragmentation.

1. Internal Fragmentation:
Internal fragmentation occurs when memory is divided into fixed-size blocks. Whenever memory is requested, a whole block is allocated. If the memory allocated to a process is somewhat larger than the memory requested, the difference between the allocated and the requested memory is internal fragmentation.

2. External Fragmentation:
External fragmentation occurs when there is enough total free memory to satisfy a request, but the request cannot be served because the free memory is not contiguous. Compaction or a better memory-allocation strategy can reduce external fragmentation.
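A one-line calculation illustrates internal fragmentation; the 4 KB block size and 4000-byte request below are hypothetical numbers chosen only for the example:

```python
block_size = 4096      # fixed allocation unit (hypothetical 4 KB block)
request    = 4000      # bytes actually requested by the process

allocated = -(-request // block_size) * block_size  # round up to whole blocks
internal_fragmentation = allocated - request
print(internal_fragmentation)  # unused bytes inside the allocated block
```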

9. Compare sequential access and direct access methods of storage devices.

Ans:  

1. Sequential Access Method:

Storage is organized into units called records, and each record is associated with an address. In sequential access, records must be accessed in linear order: the head passes over all records from the current position to the desired one, skipping intermediate records, until the target record is reached. Magnetic tape is a device example of sequential access; a linked list is a data-structure example of sequential access.

2. Direct Access:

In direct access, data can be read immediately, without passing through everything from the beginning. Unlike sequential access, direct access associates a unique address with each record, and the record is accessed through that address without searching the whole storage. Disks support direct access.

10. Define the terms:
                         (i) Disk bandwidth   (ii) Seek time.
Ans:  

Seek Time

Seek time is the time it takes for the disk arm to move the read/write head to the location of a specific piece of stored data. Other delays include transfer time (data rate) and rotational latency. Whenever anything is read from or written to the disk drive, the read/write head must first move to the right place; this physical movement of the head is called a seek, and the time the movement takes is the seek time.

Disk bandwidth 

Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer.

PART-B 

(Answer any one question from each module. Each question carries 14 Marks)

11. a) Explain the following structures of the operating system 

                        (i) Monolithic systems
                        (ii) Layered Systems 
                        (iii) Microkernel 
                        (iv) Modular approach

b) Under what circumstances would a user be better off using a time-sharing system than a
PC or a single-user workstation?

Ans 11(a):

The design of the OS depends largely on how the various components of the operating system are connected and integrated into the kernel. Depending on this, we have the following structures of the operating system:

(i) Monolithic (simple) structure

Such systems do not have a well-defined structure; they are small, simple and limited systems. The interfaces and levels of functionality are not well separated. MS-DOS is an example of such an operating system: in MS-DOS, application programs are able to access the basic I/O routines directly. In these operating systems, the whole system crashes if one user program fails.

(ii) Layered structure:

The OS can be split into pieces while retaining much greater control over the system. In this structure, the OS is divided into a number of layers (levels). The bottom layer (layer 0) is the hardware, and the top layer (layer N) is the user interface. The layers are designed so that each layer uses the functions of only lower-level layers. This simplifies debugging: if an error occurs, it must be in the layer currently being debugged, since the lower layers have already been verified.

(iii) Microkernel

This structure designs the operating system by removing all non-essential components from the kernel and implementing them as system-level and user-level programs. The result is a small kernel called a microkernel.
The advantage of this structure is that new services are added in user space and do not require the kernel to be modified. It is also more secure and reliable: if a service fails, the rest of the operating system remains untouched. Example: Mac OS.

(iv) Modular structure

It is considered the best current approach to OS design. It uses a modular kernel design: the kernel has only a set of core components, and other services are added as loadable modules at boot time or at run time. It is similar to the layered structure in that each module has a defined, protected interface, but it is more flexible than a layered structure, because any module can call any other module.
Example: Solaris OS
Ans 11(b): 

A user is better off under three conditions: when it is cheaper, faster, or easier. For example:

1. When the administration cost is shared among many users, making a time-sharing system cheaper than a single-user computer.

2. When running a simulation or computation that would take too long on a single PC or workstation.

3. When the user is travelling without a portable computer and can connect remotely to a time-sharing system to do his work.
OR

12. a) What is the main advantage of the microkernel approach to system design? How do user
programs and system programs interact in a microkernel architecture? (8)

b) Describe the differences between symmetric and asymmetric multiprocessing? What are
the advantages and disadvantages of multiprocessor systems? (6)

Ans 12 (a):

There are many benefits to using a micro-kernel over a program structure but the most important are the following:

(a) adding a new service does not require kernel modification, 

(b) it is more secure as more functions are performed in user mode than kernel mode, and 

(c) a simpler kernel design and functionality typically result in a more reliable operating system.

User programs and system services interact in a microkernel architecture through interprocess communication mechanisms such as message passing.

These messages are relayed by the microkernel. The main disadvantage of the microkernel architecture is the overhead associated with interprocess communication and the frequent use of the kernel's messaging functions needed for user processes and system services to communicate.

Ans 12 (b):

Symmetric Multiprocessing (SMP):
Each processor runs an identical copy of the operating system, and the processors communicate with one another as needed. Example: all modern operating systems (Windows NT, Windows 7/10, UNIX, Linux).

Asymmetric Multiprocessing:
Based on the master-slave concept. A master processor controls the system; the other processors either look to the master for instructions or have predefined tasks. Example: SunOS v4.

Benefits of multiprocessor systems:

  • Increased throughput
  • Economy of scale
  • Increased reliability
Disadvantages:

  • The processors share the bus, clock, memory and peripheral devices, which makes the design more complex.
  • The cost is higher.

13. a) Define process. With the help of a neat diagram explain different states of the process. (8)

b) Explain how a new process can be created in Unix using a fork system call. (6)

Ans 13 (a):

A process is a program in execution. It is defined as an entity that represents the basic unit of work in the system. A program becomes a process when it is loaded into memory. A process is divided into four sections: stack, heap, text and data.

As a process executes, it changes state. The states may differ between operating systems, but generally a process can be in one of the following five states at a time:

  • New (Start)
  • Ready
  • Running
  • Waiting
  • Terminated (Exit)



Process Control Block (PCB)

A Process Control Block is a data structure maintained by the operating system for every process. The PCB is identified by a unique process ID (PID). The PCB stores all the information needed to keep track of the process.

Ans 13 (b):

The fork() system call is used to create a new process, called the child process, which runs concurrently with the process that called fork() (the parent process). After the child process is created, both processes execute the instruction immediately following the fork() call. The child process gets a copy of the parent's program counter, CPU registers and open files.

fork() takes no parameters and returns an integer value: it returns 0 in the child process and the child's PID in the parent process; a negative value indicates that process creation failed.
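A minimal, Unix-only sketch of these return values:

```python
import os

pid = os.fork()           # Unix-only: duplicates the calling process
if pid == 0:
    # In the child, fork() returned 0
    print("child: fork returned 0, my pid is", os.getpid())
    os._exit(0)
else:
    # In the parent, fork() returned the child's PID
    os.waitpid(pid, 0)    # wait for the child to terminate
    print("parent: fork returned child pid", pid)
```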
OR

14 a) Find the average waiting time and average turnaround time for the processes given in
the table below using:-

                     i) SRT scheduling algorithm 
                    ii) Priority scheduling algorithm

  Process   Priority   Arrival Time (ms)   CPU Burst Time (ms)
  P1        0          5                   3
  P2        2          4                   1
  P3        3          1                   2
  P4        5          2                   4

b) What is a Process Control Block? Explain the fields used in a Process Control Block. (5)

Ans 14 (a):
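A sketch of the SRT (preemptive shortest-remaining-time) schedule can be obtained by simulating one time unit at a time, assuming ties are broken by earliest arrival; with the table's data this yields completion times P3=3, P2=5, P4=8, P1=11, i.e. an average turnaround time of 3.75 ms and an average waiting time of 1.25 ms:

```python
# SRT simulation for the table above:
# P1(arrival 5, burst 3), P2(4, 1), P3(1, 2), P4(2, 4)
procs = {"P1": (5, 3), "P2": (4, 1), "P3": (1, 2), "P4": (2, 4)}
remaining = {p: b for p, (a, b) in procs.items()}
finish = {}
t = min(a for a, _ in procs.values())   # start at the first arrival

while remaining:
    ready = [p for p in remaining if procs[p][0] <= t]
    if not ready:
        t += 1
        continue
    # pick the ready process with the shortest remaining time
    # (ties broken by earliest arrival -- an assumption)
    p = min(ready, key=lambda q: (remaining[q], procs[q][0]))
    remaining[p] -= 1
    t += 1
    if remaining[p] == 0:
        finish[p] = t
        del remaining[p]

tat = {p: finish[p] - procs[p][0] for p in procs}   # turnaround times
wt  = {p: tat[p] - procs[p][1] for p in procs}      # waiting times
avg_tat = sum(tat.values()) / 4
avg_wt  = sum(wt.values()) / 4
print(finish, avg_tat, avg_wt)
```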

Ans 14 (b):

Process Control Block (PCB)

A Process Control Block is a data structure maintained by the operating system for every process. The PCB is identified by a unique process ID (PID). The PCB stores all the information needed to keep track of the process, as listed below:
1) Process State

The current state of the process, i.e., whether it is new, ready, running, waiting or terminated.

2) Process privileges

These are required to allow or disallow access to system resources.

3) Process ID

Unique identification for each process in the system.

4) Pointer

A pointer to the parent process.

5) Program Counter

The program counter holds the address of the next instruction to be executed for this process.

6) CPU Registers

The various CPU registers whose contents must be saved when the process is preempted, so that the process can be resumed later.

7) CPU Scheduling Information

Process priority and other scheduling information required to schedule the process.

8) Memory management information

This includes page-table information, memory limits and the segment table, depending on the memory system used by the operating system.

9) Accounting information

This includes the amount of CPU time used by the process, time limits, account numbers, etc.

10) I/O status information

This includes the list of I/O devices allocated to the process.

The structure of the PCB depends entirely on the operating system and may contain different information in different operating systems. Here is a simplified diagram of a PCB.
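The fields above can be sketched as a simple record. This is only an illustration: an actual PCB is a kernel data structure (e.g. Linux's task_struct), and the field names and values here are invented for the example:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names mirror the list above, not any real OS.
@dataclass
class PCB:
    pid: int
    state: str                  # e.g. "new", "ready", "running", "waiting"
    parent_pid: int             # pointer to the parent process
    program_counter: int        # address of the next instruction
    cpu_registers: dict = field(default_factory=dict)
    priority: int = 0           # CPU scheduling information
    page_table_base: int = 0    # memory management information
    cpu_time_used: int = 0      # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42, state="ready", parent_pid=1, program_counter=0x400000)
print(pcb.pid, pcb.state)
```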

15. Consider a system with five processes P0 through P4 and three resources of type A, B, C.
Resource type A has 10 instances, B has 5 instances and C has 7 instances. Suppose at time
t0 the following snapshot of the system has been taken:

Operating System Deadlock Problem


i) What will be the content of the Need matrix? Is the system in a safe state? If Yes, then what is
the safe sequence? (8)

ii)What will happen if process P1 requests one additional instance of resource type A and two
instances of resource type C? (6)

Ans 15 (i):

Need [i] =  Max [i] - Allocation [i]
Need for Process P1:  (7, 5, 3) - (0, 1, 0) = 7, 4, 3
Need for Process P2:  (3, 2, 2) - (2, 0, 0) = 1, 2, 2
Need for Process P3:  (9, 0, 2) - (3, 0, 2) = 6, 0, 0
Need for Process P4:  (2, 2, 2) - (2, 1, 1) = 0, 1, 1
Need for Process P5:  (4, 3, 3) - (0, 0, 2) = 4, 3, 1

The content of the Need matrix is:

Process Need
A  B C
P1 7 4 3
P2 1 2 2
P3 6 0 0
P4 0 1 1
P5 4 3 1

Step 1: For Process P1:

Need <= Available

7, 4, 3 <= 3, 3, 2 It is false.

So, check another process, P2.

Step 2: For Process P2:

Need <= Available

1, 2, 2 <= 3, 3, 2 It is true

New available = available + Allocation

(3, 3, 2) + (2, 0, 0) => 5, 3, 2

Now we check another process P3.

Step 3: For  Process P3:

P3 Need <=  Available

6, 0, 0 <= 5, 3, 2 It is false.

Now we check another process, P4.

Step 4: For Process P4:

P4 Need  <= Available

0, 1, 1 <= 5, 3, 2 It is true

New Available resource = Available + Allocation

5, 3, 2 + 2, 1, 1 => 7, 4, 3

Now check another process P5.

Step 5: For Process P5:

P5 Need <= Available

4, 3, 1 <= 7, 4, 3 It is true

New available resource = Available + Allocation

7, 4, 3 + 0, 0, 2 => 7, 4, 5

Now, we check again resources for processes P1 and P3.

Step 6: For Process P1:

P1 Need <= Available

7, 4, 3 <= 7, 4, 5 It is true

New Available Resource = Available + Allocation

7, 4, 5 + 0, 1, 0 => 7, 5, 5

Now we check the remaining process, P3.

Step 7: For Process P3:

P3 Need <= Available

6, 0, 0 <= 7, 5, 5 It is true

New Available Resource = Available + Allocation

7, 5, 5 + 3, 0, 2 => 10, 5, 7

All processes can be executed, so the system is in a safe state. Using the Banker's algorithm, we found the safe sequence: P2, P4, P5, P1, P3.
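The safety algorithm used above can be sketched as repeated in-order passes over the processes, releasing each process's allocation back to the work vector once its need can be met:

```python
def safe_sequence(available, allocation, maximum, names):
    # Need = Max - Allocation
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(maximum, allocation)]
    work, done, order = list(available), [False] * len(names), []
    progress = True
    while progress:
        progress = False
        for i in range(len(names)):          # repeated in-order passes
            if not done[i] and all(n <= w for n, w in zip(need[i], work)):
                # process i can finish: release its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                done[i] = progress = True
                order.append(names[i])
    return order if all(done) else None     # None => unsafe state

alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maxm  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
seq = safe_sequence([3, 3, 2], alloc, maxm, ["P1", "P2", "P3", "P4", "P5"])
print(seq)
```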

Ans 15 (ii):

To grant the request (1, 0, 2), we first check that Request <= Need for P1 and Request <= Available, i.e. (1, 0, 2) <= (3, 3, 2); both conditions hold. We then pretend the request is granted: Available becomes (2, 3, 0) and P1's allocation increases by (1, 0, 2). Finally, the safety algorithm must be re-run on this new state; the request is actually granted only if the resulting state is still safe.

OR

16. a) State dining philosopher’s problem and give a solution using semaphores. (7)

b) What do you mean by binary semaphore and counting semaphore? With C struct, explain
the implementation of wait () and signal()

Ans 16 (a):

The Dining Philosophers Problem states that K philosophers are seated around a circular table with one chopstick between each pair of philosophers. A philosopher may eat only if he can pick up the two chopsticks adjacent to him. Each chopstick may be picked up by either of its two adjacent philosophers, but not by both.



Semaphore Solution in Dining Philosopher -

Each philosopher is represented by the following pseudocode:

               process P[i]
                      while true do {
                           THINK;
                           PICKUP(CHOPSTICK[i], CHOPSTICK[i+1 mod 5]); 
                           EAT; 
                           PUTDOWN(CHOPSTICK[i], CHOPSTICK[i+1 mod 5])
                        }

A philosopher can be in one of three states: THINKING, HUNGRY and EATING. Two semaphore mechanisms are used here: a mutex and a semaphore array with one entry per philosopher. The mutex ensures that no two philosophers execute a pickup or putdown at the same time, and the array is used to control the state of each philosopher. Note that a careless semaphore solution can lead to deadlock.

Ans 16 (b):

In computer science, a semaphore is a variable or abstract data type used to control access to a common resource shared by multiple processes, as in a multiprogramming operating system.
Semaphores that allow an arbitrary resource count are called counting semaphores, while semaphores restricted to the values 0 and 1 (locked/unlocked, unavailable/available) are called binary semaphores and are used to implement locks.

Implementation of semaphores:

Implementation of counting semaphores:
wait(Semaphore s){
    while (s == 0);    /* busy-wait until s > 0 */
    s = s - 1;
}

signal(Semaphore s){
    s=s+1;
}

Init(Semaphore s, Int v){
    s=v;
}

Implementation of binary semaphores:
do
{
    wait(s);
    // critical section
    signal(s);
    // remainder section
} while(1);
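The same ideas can be tried with a real library: Python's threading.Semaphore supports both counting and binary use (a sketch; the worker count and the limit of 3 are arbitrary choices for the example):

```python
import threading

# Counting semaphore: at most 3 workers inside the "resource" section at once
counting = threading.Semaphore(3)
# Binary semaphore (initial value 1): behaves like a lock
binary = threading.Semaphore(1)

active = 0   # workers currently inside the counting-semaphore section
peak = 0     # highest value 'active' ever reached

def worker():
    global active, peak
    counting.acquire()            # wait(): decrement, block if already 0
    with binary:                  # protect the shared counters
        active += 1
        peak = max(peak, active)
    with binary:
        active -= 1
    counting.release()            # signal(): increment, wake one waiter

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)   # never exceeds the counting semaphore's limit of 3
```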

17. a) Consider the following page reference string 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2,
3, 6. Find out the number of page faults if there are 4-page frames, using the following
page replacement algorithms  (9)

                                  i) LRU 
                                 ii) FIFO 
                                iii) Optimal 

b) Explain the steps involved in handling a page fault. (5)

Ans 17 (a):

We are given the page reference string above. Simulating each policy with 4 frames and initially empty memory gives 10 page faults for LRU, 14 for FIFO and 8 for Optimal.
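A sketch simulation of the three policies, assuming empty frames initially and, for Optimal, evicting the page whose next use lies farthest in the future (pages never used again are evicted first):

```python
def fifo(refs, k):
    frames, faults = [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == k:
                frames.pop(0)            # evict the oldest-loaded page
            frames.append(p)
    return faults

def lru(refs, k):
    frames, faults = [], 0               # list ordered least- to most-recent
    for p in refs:
        if p in frames:
            frames.remove(p)
        else:
            faults += 1
            if len(frames) == k:
                frames.pop(0)            # evict the least recently used page
        frames.append(p)                 # mark p as most recently used
    return faults

def optimal(refs, k):
    frames, faults = [], 0
    for i, p in enumerate(refs):
        if p not in frames:
            faults += 1
            if len(frames) == k:
                future = refs[i + 1:]
                # evict the page whose next use is farthest away
                victim = max(frames, key=lambda q:
                             future.index(q) if q in future else len(future) + 1)
                frames.remove(victim)
            frames.append(p)
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6]
lru_f, fifo_f, opt_f = lru(refs, 4), fifo(refs, 4), optimal(refs, 4)
print(lru_f, fifo_f, opt_f)
```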

Ans 17 (b):

  • The requested memory address is checked first, to ensure that it was a valid memory request.
  • If the reference is invalid, the process is terminated. Otherwise, the page must be brought into memory.
  • A free frame is located, possibly from a free-frame list.
  • A disk operation is scheduled to bring the required page in from the disk. (This usually blocks the process on I/O wait, allowing some other process to use the CPU in the meantime.)
  • When the I/O operation is complete, the process's page table is updated with the new frame number, and the invalid bit is changed to indicate that this is now a valid page reference.
  • The instruction that caused the page fault is then restarted from the beginning (as soon as this process regains the CPU).
OR

18. a) With a diagram, explain how paging is done with TLB. (5)

b) Memory partitions of sizes 100 kb, 500 kb, 200 kb, 300 kb, 600 kb are available, how
would best, worst and first-fit algorithms place processes of size 212 kb, 417 kb, 112 kb,
426 kb in order. Rank the algorithms in terms of how efficiently they use memory. (9)

Ans 18 (a):

A Translation Lookaside Buffer (TLB) is a special cache used to keep track of recently used address translations: it contains the most recently used page-table entries. Given a virtual address, the processor first examines the TLB. If the page-table entry is present (a TLB hit), the frame number is retrieved and the physical address is formed. If the entry is not found in the TLB (a TLB miss), the page number is used to index the page table in main memory. If the page is not in main memory, a page fault is raised; otherwise the TLB is updated with the new page-table entry.

Steps on a TLB hit:
  • The CPU generates a virtual (logical) address.
  • The TLB is checked and the entry is found (hit).
  • The corresponding frame number is returned, which tells us where the page lies in main memory.
Steps on a TLB miss:
  • The CPU generates a virtual (logical) address.
  • The TLB is checked and the entry is not found (miss).
  • The page number is looked up in the page table residing in main memory (assuming the page table contains all PTEs).
  • The corresponding frame number is returned, which tells us where the page lies in main memory.
  • The TLB is updated with the new PTE (if no slot is free, a replacement policy such as FIFO, LRU or MFU chooses a victim entry).
 
Effective memory access time(EMAT): TLB is used to reduce effective memory access time as it is a high-speed associative cache. 

EMAT = h*(c+m) + (1-h)*(c+2m) 
where, h = hit ratio of TLB 
m = Memory access time 
c = TLB access time 
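Plugging hypothetical values into the formula (h = 0.8, c = 20 ns, m = 100 ns are illustrative numbers only, not from the question):

```python
# Hypothetical values: TLB access c = 20 ns, memory access m = 100 ns,
# TLB hit ratio h = 0.8
h, c, m = 0.8, 20, 100

# weighted average of the hit path (c + m) and the miss path (c + 2m)
emat = h * (c + m) + (1 - h) * (c + 2 * m)
print(emat)
```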

Ans 18 (b):

First-fit:

In first fit, we allocate each process to the first free partition that can accommodate it.

 212K is put in the 500K partition (Remaining = 500-212 = 288)

 417K is put in the 600K partition (Remaining = 600-417 = 183)

 112K is put in the 288K hole left in the 500K partition (Remaining = 288-112 = 176)

 426K must wait


 Best-fit:

In best fit, we allocate the smallest free partition that is large enough: the entire list of free partitions is searched, and the smallest adequate partition is chosen.

 212K can be put in 300K partition (Remaining = 300-212 = 88)

 417K can be put in 500K partition (Remaining = 500-417 = 83)

 112K can be put in 200K partition (Remaining = 200-112 = 88)

 426K can be put in 600K partition (Remaining = 600-426 = 174)


Worst-fit:

In worst fit, we allocate each process to the largest available partition.

 212K is put in the 600K partition (Remaining = 600-212 = 388)

 417K is put in the 500K partition (Remaining = 500-417 = 83)

 112K is put in the 388K hole (Remaining = 388-112 = 276)

 426K must wait
 
In this example, best fit uses memory most efficiently: it is the only algorithm that places all four processes.
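The three placement runs above can be reproduced with a small simulation (a sketch; `None` marks a request that must wait, and the returned indices refer to the original partition list):

```python
def allocate(partitions, requests, strategy):
    parts = partitions[:]            # remaining size of each partition
    placements = []
    for req in requests:
        candidates = [i for i, size in enumerate(parts) if size >= req]
        if not candidates:
            placements.append(None)  # no partition fits: request must wait
            continue
        if strategy == "first":
            i = candidates[0]                                 # first fit
        elif strategy == "best":
            i = min(candidates, key=lambda j: parts[j])       # smallest fit
        else:
            i = max(candidates, key=lambda j: parts[j])       # largest fit
        parts[i] -= req
        placements.append(i)
    return placements

parts = [100, 500, 200, 300, 600]
reqs = [212, 417, 112, 426]
results = {s: allocate(parts, reqs, s) for s in ("first", "best", "worst")}
print(results)
```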


19. a) Suppose that a disk drive has 5000 cylinders, numbered 0 to 4999. The drive currently
services a request at cylinder 143, and the previous request was at cylinder 125. The queue
of pending requests, in FIFO order, is 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130.
Starting from the current position, what is the total distance (in cylinders) that the disk arm
moves to satisfy all pending requests for each of the following algorithms(10)

                                      i) FCFS 
                                     ii) SSTF 
                                    iii) SCAN 
                                    iv) LOOK 
                                     v) C-scan 

b) What is the use of the access matrix in the protection mechanism? (4)

Ans 19 (a):

i) FCFS

Starting at cylinder 143, the arm services the requests in queue order.

Distances:

143 to 86=57; 
86 to 1470=1384; 
1470 to 913=557; 
913 to 1774=861; 
1774 to 948=826; 
948 to 1509=561; 
1509 to 1022=487; 
1022 to 1750=728; 
1750 to 130=1620;
Total distance moved by the disk arm=7081

ii) SSTF

The nearest pending request is serviced first.

Distances:

143 to 130=13; 130 to 86=44; 86 to 913=827; 913 to 948=35; 948 to 1022=74; 1022 to 1470=448; 1470 to 1509=39; 1509 to 1750=241; 1750 to 1774=24;

Total distance moved by the disk arm=1745

iii) SCAN

Since the previous request was at 125 and the head is now at 143, the head is moving towards higher cylinder numbers. It services 913, 948, 1022, 1470, 1509, 1750 and 1774 on the way up, continues to the end of the disk (cylinder 4999), then reverses and services 130 and 86.

Distance = (4999-143) + (4999-86) = 4856 + 4913

Total distance moved by the disk arm=9769

iv) LOOK

LOOK is an improvement on SCAN: the arm goes only as far as the last request in each direction. It moves up from 143 to 1774, then reverses and moves down to 86.

Distance = (1774-143) + (1774-86) = 1631 + 1688

Total distance moved by the disk arm=3319

v) C-SCAN

The head moves up from 143 to the end of the disk, servicing 913 through 1774 on the way, jumps back to cylinder 0, then moves up again to service 86 and 130.

Distance = (4999-143) + (4999-0) + (130-0) = 4856 + 4999 + 130

Total distance moved by the disk arm=9985
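One way to compute these totals programmatically (a sketch; note that the result depends on whether the initial move from the previous head position 125 to 143 is counted - here it is not - and LOOK assumes the head is moving towards higher cylinders, since the previous request was at 125):

```python
def fcfs(head, queue):
    total = 0
    for r in queue:                       # service requests in queue order
        total += abs(head - r)
        head = r
    return total

def sstf(head, queue):
    pending, total = list(queue), 0
    while pending:
        r = min(pending, key=lambda x: abs(head - x))  # nearest request first
        total += abs(head - r)
        head = r
        pending.remove(r)
    return total

def look(head, queue, upward=True):
    # service all requests in the current direction, then reverse
    up = sorted(r for r in queue if r >= head)
    down = sorted((r for r in queue if r < head), reverse=True)
    first, second = (up, down) if upward else (down, up)
    total, pos = 0, head
    for r in first + second:
        total += abs(pos - r)
        pos = r
    return total

queue = [86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130]
# head at cylinder 143, moving upward (previous request was at 125)
t_fcfs = fcfs(143, queue)
t_sstf = sstf(143, queue)
t_look = look(143, queue, upward=True)
print(t_fcfs, t_sstf, t_look)
```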
Ans 19 (b):

The Access Matrix is a security model of protection in a computer system. It is represented as a matrix. The access matrix defines the rights of each process executing in a domain with respect to each object. The rows of the matrix represent domains, and the columns represent objects. Each cell of the matrix represents a set of access rights granted to the processes of that domain; that is, entry (i, j) describes the set of operations that a process executing in domain Di can invoke on object Oj.
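A toy access matrix can be modelled as a nested table; the domains, objects and rights below are hypothetical examples, not from the question:

```python
# Hypothetical access matrix: rows are domains, columns are objects,
# entries are the operations a process in that domain may request.
access_matrix = {
    "D1": {"F1": {"read"}, "F2": set(), "printer": set()},
    "D2": {"F1": set(), "F2": {"read", "write"}, "printer": {"print"}},
}

def allowed(domain, obj, op):
    # entry (domain, obj) is the set of rights; missing objects grant nothing
    return op in access_matrix[domain].get(obj, set())

print(allowed("D1", "F1", "read"), allowed("D1", "F2", "write"))
```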
OR

20. a) Explain the different file allocation methods with advantages and disadvantages. (8)

b) Explain the following: i) file types ii) file operations iii) file attributes

    Ans 20 (a):

    Contiguous Allocation: – 
    Contiguous allocation is one of the most widely used distribution methods. A parallel distribution means we allocate a block in such a way that on a hard disk, all blocks receive a visible block.

    Advantages:

    The benefits are:

    • The corresponding sharing method provides excellent learning performance.
    • Combined sharing is easy to use.
    • The corresponding distribution method supports both types of file access methods which are sequential access and direct access.
    • The contiguous distribution method is faster because in this way the number of claims is smaller due to the combined distribution of file blocks.
    Disadvantages:

    Disadvantages of the method are:

    • With an integrated distribution system, sometimes the disk can be partitioned.
    • In this way, it is difficult to increase the file size due to the availability of memory blocks.
    Link List Allocation
    The linked listing method overcomes the obstacles of the integrated distribution system. In this file-sharing method, each file is treated as a linked list of disk blocks. In the case of a shared list, it is not necessary for the disk blocks assigned to a particular file to be arranged in a consistent manner on the disk. The directory includes the first pointer to the file block and the last file block. Each disk block allocated or assigned to a file contains a pointer, and that pointer identifies the next disk block, assigned to the same file.

    Advantages:
    There are various benefits to providing a linked list:

    • In the linked list, there are no external divisions. Because of this, we can use memory better.
    • In a linked list allocation, the listing includes only the address of the original block.
    • The connected distribution method is flexible because we can increase the file size faster because, in this case, to provide the file, we do not need part of the memory in an integrated way.
    Disadvantages:
    There are various disadvantages of affiliate listing:

    • The provisioning of the linked list does not support direct access or random access.
    • For a shared listing, we need to break each block.
    • If the linked list identifier cuts the linked list quota, the file will be corrupted.
    • In the cursor disk block, it needs more space.

    Indexed Allocation

     The indexed sharing method is another method used for file sharing. In the direction of the index, we have an additional block, and that block is known as the reference block. In each file, there is a reference block for each. In the index block, the ith entry contains the disk address of the ith block file. We can see in the diagram below that the directory entry includes the reference block address.

    Advantages:

    The benefits of indexed allocation are:

    • Indexed allocation solves the problem of external fragmentation.
    • Indexed allocation supports direct (random) access.
    Disadvantages:

    • Indexed allocation has a higher pointer overhead than linked allocation.
    • If the index block is lost or corrupted, the entire file may be lost.
    • For a small file, dedicating an entire index block is wasteful.
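    A toy simulation in the same style as before makes the contrast with linked allocation clear: the index block maps logical block i directly to its disk address, so no chain traversal is needed. The disk layout and allocation policy here are illustrative assumptions.

```python
# Toy simulation of indexed allocation: one index block per file holds
# the disk addresses of all its data blocks, giving direct access.

DISK_SIZE = 16
disk = [None] * DISK_SIZE
free_blocks = list(range(DISK_SIZE))

def create_file(chunks):
    index_block = free_blocks.pop(0)   # an extra block just for pointers
    pointers = []
    for data in chunks:
        b = free_blocks.pop(0)
        disk[b] = data
        pointers.append(b)
    disk[index_block] = pointers       # ith entry -> ith data block
    return index_block                 # the directory stores this address

def read_block(index_block, i):
    # Direct access: one lookup in the index, no chain to follow.
    return disk[disk[index_block][i]]

idx = create_file(["aa", "bb", "cc"])
print(read_block(idx, 2))              # -> cc
```

    The cost of this direct access is visible too: the file above needs four blocks for three blocks of data, which is the pointer overhead and small-file waste listed in the disadvantages.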
    Ans 20 (b):

    File Types

    This refers to the ability of the operating system to distinguish different types of files, such as text files, binary files, and source files. Operating systems such as MS-DOS and UNIX recognize the following file types:

    Special character files:

    A character special file represents a hardware device that reads or writes data character by character, such as a mouse or a printer.

    Regular files:
    • These files store user information.
    • They can contain text, applications, or data.
    • They allow the user to perform operations such as adding, deleting, and modifying content.
    Directory files:
    • A directory contains files along with related information about those files. It is essentially a folder used to store and organize multiple files.
    Special files:
    • These files are also called device files. They represent physical devices such as printers, disks, networks, and flash drives.
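    On a UNIX-like system these type distinctions are recorded in the file's mode bits, which Python exposes through `os.stat` and the `stat` module. A small sketch (the temporary paths are illustrative):

```python
# Distinguishing file types the way a UNIX-like OS does: by testing
# the mode bits returned by stat().
import os
import stat
import tempfile

def classify(path):
    mode = os.stat(path).st_mode
    if stat.S_ISDIR(mode):
        return "directory file"
    if stat.S_ISCHR(mode):
        return "character special file"
    if stat.S_ISREG(mode):
        return "regular file"
    return "other"

d = tempfile.mkdtemp()                 # a directory file
f = os.path.join(d, "notes.txt")
open(f, "w").close()                   # a regular file

print(classify(d))                     # -> directory file
print(classify(f))                     # -> regular file
```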
    File Operations

    A file is a logically related collection of data recorded on secondary storage. The contents of a file are defined by its creator. The various operations that can be performed on a file, such as reading, writing, opening, and closing, are called file operations. These operations are carried out through system calls provided by the operating system. Some common operations are the following:

    1. Create operation: Creates a new file, finds space for it on the disk, and makes an entry for it in the directory.
    2. Open operation: Opens an existing file so that further operations can be performed on it.
    3. Write operation: Writes data into the file at the position given by the write pointer.
    4. Read operation: Reads data from the file at the position given by the read pointer.
    5. Re-position or Seek operation: Moves the file pointer to a given position within the file.
    6. Delete operation: Removes the file and releases its disk space and directory entry.
    7. Truncate operation: Erases the contents of a file while keeping its attributes.
    8. Close operation: Closes the file and releases any resources associated with it.
    9. Append operation: Adds data at the end of the file.
    10. Rename operation: Changes the name of the file.
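    These operations map directly onto system calls, which Python's standard library wraps. A minimal walkthrough, using a temporary file as an illustrative example:

```python
# Demonstrating common file operations via Python's stdlib wrappers.
import os
import tempfile

d = tempfile.mkdtemp()
path = os.path.join(d, "demo.txt")

f = open(path, "w")                # create + open
f.write("hello world")             # write
f.close()                          # close

f = open(path, "r+")
f.seek(6)                          # re-position (seek) to byte 6
print(f.read())                    # read from there -> world
f.truncate(5)                      # truncate: keep only the first 5 bytes
f.close()

f = open(path, "a")
f.write("!")                       # append
f.close()

new_path = os.path.join(d, "renamed.txt")
os.rename(path, new_path)          # rename
final = open(new_path).read()      # -> hello!
os.remove(new_path)                # delete
```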

    File Attributes

    1. Name

    Every file has a name by which it is known to the file system. A single directory cannot contain two files with the same name.

    2. Identifier

    Along with the name, each file has a unique identifier, usually a number, by which the file system recognizes it internally; it is the file's non-human-readable name.

    3. Type

    In the file system, files are categorized into different types such as video files, audio files, text files, executable files, etc.

    4. Location

    Each file is stored at a particular location on the storage device, and this location is kept as one of its attributes.

    5. Size

    The file size is one of its most important attributes. By file size, we mean the number of bytes (or blocks) the file occupies on storage.

    6. Protection

    The computer administrator may require different protections for different files. So each file has its own set of permissions for a different group of users.

    7. Time and Date

    Each file carries a timestamp recording the date and time the file was last modified.
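    Most of these attributes can be read from the file's metadata, which UNIX-like systems keep in the inode. A sketch using Python's `os.stat` (the file path is an illustrative example):

```python
# Reading file attributes from the metadata kept by the file system.
import os
import stat
import tempfile
import time

d = tempfile.mkdtemp()
path = os.path.join(d, "report.txt")        # name + location
with open(path, "w") as f:
    f.write("12345")                        # 5 bytes of content

info = os.stat(path)
print("size:", info.st_size)                           # size in bytes
print("protection:", oct(stat.S_IMODE(info.st_mode)))  # permission bits
print("modified:", time.ctime(info.st_mtime))          # time and date
print("identifier (inode):", info.st_ino)              # unique id in the FS
```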

    ---------------

    We hope the given KTU S4 CST200 Operating Systems Solved Model Question Paper based on the 2019 scheme will help you in your upcoming examinations. If you found it useful, share it with your friends.

    "Share KeralaNotes.Com with your friends"

    We have solved the Operating Systems model question paper given in the syllabus for S4 CSE students, answering all the questions as they appear in the model paper for the KTU 2019 scheme. It covers all the chapters in the KTU 2019 syllabus. Operating Systems is one of the most important subjects in the KTU syllabus, and this solved model question paper, together with the answer key provided, will definitely help you score good marks.

    If you have any queries regarding the KTU S4 Computer Science (CSE) Study Materials, drop a comment below and we will get back to you at the earliest.
