
50 Best SAP BW Interview Questions and Answers

Updated Jun 28, 2019

Q1. SOME DATA IS UPLOADED TWICE INTO INFOCUBE. HOW TO CORRECT IT?

Ans: This should not normally happen, but if the same data has been loaded twice (for example manually), you can delete the duplicate load by request.

Q2. CAN YOU ADD A NEW FIELD AT THE ODS LEVEL?

Sure you can. An ODS is nothing but a transparent table, so new fields can be added.

Q3. CAN A NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?

Yes, of course. For example, for loading texts and hierarchies we use different DataSources but the same InfoSource.

Q4. BRIEF THE DATAFLOW IN BW. 

Data flows from the transactional system to the analytical system (BW). The DataSource on the transactional system needs to be replicated on the BW side and attached to an InfoSource and update rules respectively.

Q5. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER RULES? 

Q6. WHAT IS THE PROCEDURE TO UPDATE DATA INTO DATA TARGETS?

Full and delta.

Q7. AS WE USE Sbwnn, SBiw1, sbiw2 FOR DELTA UPDATE IN LIS, WHAT IS THE PROCEDURE IN LO-COCKPIT?

There is no LIS in the LO Cockpit. We have DataSources that can be maintained (fields appended). Refer to the white paper on LO Cockpit extraction.

Q8. SIGNIFICANCE OF ODS. 

It holds granular data.

Q9. WHERE IS THE PSA DATA STORED?

In the PSA tables.

Q10. WHAT IS DATA SIZE?

The volume of data one data target holds (in number of records).

Q11. DIFFERENT TYPES OF INFOCUBES.

Basic, Virtual (remote, SAP remote and multi)

Q12. INFOSET QUERY.

Can be made of ODS objects and InfoObjects.

Q13. IF THERE ARE 2 DATASOURCES HOW MANY TRANSFER STRUCTURES ARE THERE.

In R/3 or in BW? Two in R/3 and two in BW.

Q14. ROUTINES?

Routines exist in the InfoObject, as transfer routines, as update routines and as the start routine (see the start routine sketch below).
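
A minimal sketch of the user-written part of a start routine in BW 3.x update rules, assuming a hypothetical DATA_PACKAGE field QUANTITY (the FORM frame, the DATA_PACKAGE table with header line and the ABORT parameter are generated by the system):

    " Drop records that carry no quantity so they never reach the data target.
    DELETE data_package WHERE quantity = 0.

    " ABORT <> 0 would cancel the whole data package; leave it at 0 on success.
    abort = 0.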

Q15. BRIEF SOME STRUCTURES USED IN BEX.

In the rows and columns of a query you can create structures, i.e. reusable groups of key figures or characteristics.

Q16. WHAT ARE THE DIFFERENT VARIABLES USED IN BEX? 

  • Variable with default entry
  • Replacement path
  • SAP exit
  • Customer exit
  • Authorization

Q17. HOW MANY LEVELS CAN YOU GO TO IN REPORTING?

You can drill down to any level you want using navigational attributes and jump targets.

Q18. WHAT ARE INDEXES? 

Indexes are database indexes, which help in retrieving data faster.

Q19. DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.

Refer to the documentation.

Q20. IS IT NECESSARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED?

No, the delta is initialized only once.

Q21. WHAT IS THE SIGNIFICANCE OF KPI'S? 

KPIs (key performance indicators) indicate the performance of a company. They are key figures.

Q22. AFTER THE DATA EXTRACTION WHAT IS THE IMAGE POSITION. 

After image (correct me if I am wrong).

Q23. REPORTING AND RESTRICTIONS. 

Refer to the documentation.

Q24. TOOLS USED FOR PERFORMANCE TUNING.

The ST* transactions, number range buffering, deleting indexes before the load, etc.

Q25. PROCESS CHAINS: IF YOU ARE USING THEM, HOW WILL YOU SCHEDULE DATA LOADS DAILY?

There should be some tool to run the job daily (SM37 jobs).

Q26. AUTHORIZATIONS. 

Profile generator

Q27. WEB REPORTING.

What are you expecting?

Q28. CAN A CHARACTERISTIC BE AN INFOPROVIDER? CAN AN INFOOBJECT BE AN INFOPROVIDER?

Of course

Q29. PROCEDURES OF REPORTING ON MULTICUBES.

Refer to the help. A MultiCube works on a union condition.

Q30. EXPLAIN TRANSPORTATION OF OBJECTS? 

Dev ---> Q and Dev ---> P

1. What is table partition? 

A: SAP uses fact table partitioning to improve performance. You can partition only on 0CALMONTH or 0FISCPER.

2. What are the options available in a transfer rule, and when ABAP code is required in a transfer routine, what important variables can you use?

A: Assign an InfoObject, assign a constant, an ABAP routine or a formula. In an ABAP routine the important variables are TRAN_STRUCTURE (the current source record), RESULT, RETURNCODE and ABORT.
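
A minimal sketch of the user-written part of such a transfer routine (the FORM frame and these variables are generated by the system; the source field BUKRS is a hypothetical example):

    " Skip records without a company code instead of failing the whole load.
    IF tran_structure-bukrs IS INITIAL.
      returncode = 4.               " RETURNCODE <> 0 skips this record
    ELSE.
      result     = tran_structure-bukrs.
      returncode = 0.
    ENDIF.
    abort = 0.                      " ABORT <> 0 would cancel the data package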

3. How would you optimize the dimensions? 

A: Use as many dimensions as possible for performance improvement. For example, assume you have 100 products and 200 customers; if you put both in one dimension, the worst-case size of that dimension table is 20,000 rows; if you make individual dimensions, the total number of rows is only 300. Even if you put more than one characteristic per dimension, do the math for the worst-case scenario and decide which characteristics may be combined in a dimension (see the sketch below).
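
The worst-case arithmetic can be restated as a trivial standalone snippet (hypothetical report name, values taken from the example above):

    REPORT z_dim_size_check.                      " hypothetical report name

    " Worst case: combining two independent characteristics in one
    " dimension can produce every combination of their values.
    DATA: lv_products  TYPE i VALUE 100,
          lv_customers TYPE i VALUE 200,
          lv_combined  TYPE i,
          lv_separate  TYPE i.

    lv_combined = lv_products * lv_customers.     " 20 000 rows in one dimension
    lv_separate = lv_products + lv_customers.     " 300 rows across two dimensions

    WRITE: / 'One combined dimension (worst case):', lv_combined,
           / 'Two separate dimensions:', lv_separate.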

4. What are the conversion routines for units and currencies in the update rule? 

A: Time characteristics are converted automatically; for example, if the cube contains calendar month and your transfer structure contains a date, the date is converted to calendar month automatically.
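
Unit or currency conversions that are not handled automatically can be coded in an update routine. Below is a minimal sketch, assuming the generated routine provides COMM_STRUCTURE, RESULT and RETURNCODE, and using the standard function module UNIT_CONVERSION_SIMPLE; the field names QUANTITY and BASE_UOM and the target unit KG are hypothetical:

    " Hypothetical user code inside a generated update routine (BW 3.x).
    DATA: lv_converted TYPE p DECIMALS 3.

    CALL FUNCTION 'UNIT_CONVERSION_SIMPLE'
      EXPORTING
        input    = comm_structure-quantity   " assumed source key figure
        unit_in  = comm_structure-base_uom   " assumed source unit field
        unit_out = 'KG'                      " assumed target unit
      IMPORTING
        output   = lv_converted
      EXCEPTIONS
        OTHERS   = 1.

    IF sy-subrc = 0.
      result     = lv_converted.
      returncode = 0.
    ELSE.
      " No conversion factor maintained: skip the record.
      returncode = 4.
    ENDIF.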

5. Can you make an InfoObject an InfoProvider, and why?

A. Yes. When you want to report on characteristics or master data, you can make them InfoProviders. For example, you can make 0CUSTOMER an InfoProvider and do BEx reporting on 0CUSTOMER: right-click on the InfoArea and select 'Insert characteristic as data target'.

6. What are the steps to load non-cumulative cubes?

Ans:

1. Initialize the opening balance in R/3 (S278).
2. Activate extract structure MC03BF0 for DataSource 2LIS_03_BF.
3. Set up historical material documents in R/3.
4. Load the opening balance using DataSource 2LIS_40_S278.
5. Load historical movements and compress without marker update.
6. Set up the V3 update.
7. Load deltas using 2LIS_03_BF.

7. Give a step-by-step approach to archiving cubes.

Ans:

1. Double-click on the cube (or right-click and select Change).
2. Extras -> Select archival.
3. Choose fields for selection (like 0CALDAY, 0CUSTOMER, etc.).
4. Define the file structure (max file size and max number of data objects).
5. Select the folder (logical file name).
6. Select the delete options (not scheduled, start automatically or after event).
7. Activate the cube.
8. The cube is ready for archival.

8. What are the load process and post processing? 

A: InfoPackage, read PSA and update data target, save hierarchy, update ODS object data, data export (open hub), delete overlapping requests.

9. What are the data target administration tasks?

A: Delete index, generate index, construct database statistics, initial fill of new aggregates, roll-up of filled aggregates, compression of the InfoCube, activate ODS data, complete deletion of data target.

10. What are the parallel processes that could have locking problems?

Ans:

1. Hierarchy/attribute change run.
2. Loading master data for the same InfoObject; for example, avoid loading master data from different source systems at the same time.
3. Rolling up for the same InfoCube.
4. Selective deletion of an InfoCube/ODS and parallel loading.
5. Activation or deletion of an ODS object while loading in parallel.

11. How would you convert an InfoPackage group into a process chain?

A: Double-click on the InfoPackage group, click on the 'Process Chain Maint' button and type in the name and description; the individual InfoPackages are inserted automatically.

12. How do you transform Open Hub data?

A: Using a BAdI.

13. What data loading tuning can one do?

Ans:

1. Watch the ABAP code in transfer and update rules.
2. Balance the load across different servers.
3. Build indexes on the source tables.
4. Use fixed-length files if you load data from flat files, and put the file on the application server.
5. Use content extractors.
6. Use the 'PSA and data target in parallel' option in the InfoPackage.
7. Start several InfoPackages in parallel with different selection options.
8. Buffer the SID number ranges if you load a lot of data at once.
9. Load master data before loading transaction data.

14. What is ODS? 

A: Operational Data Store. You can overwrite existing data in an ODS.

15. What is the use of BW Statistics?

A: A set of cubes delivered by SAP that is used to measure performance for queries, data loading, etc. It also shows the usage of aggregates and the cost associated with them.

16. What are the options when defining aggregates? 

Ans:

* - groups according to characteristics
H - Hierarchy
F - fixed value
Blank --- none

17. How will you debug errors with the SAP GUI (like ActiveX errors, etc.)?

A: Run the BEx Analyzer -> Business Explorer menu item -> Installation Check; this shows an Excel sheet with a Start button; click on it to verify the GUI installation; if you find any errors, either reinstall or fix them.

18. When you write a user exit for variables, what does I_STEP do?

A: I_STEP tells you at which stage of variable processing the exit is called and is used in the ABAP code as a conditional check.
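
For illustration, a minimal sketch of the customer exit for BEx variables (function exit EXIT_SAPLRRS0_001, user include ZXRSRU01) branching on I_STEP; the variable name ZCURMONTH is a hypothetical example:

    " I_STEP = 1: before the variable screen (derive default values)
    " I_STEP = 2: after user entry, for variables not ready for input
    " I_STEP = 3: after all variables are filled (cross-variable checks)
    " I_STEP = 0: calls outside the variable screen (e.g. authorization variables)
    DATA: ls_range LIKE LINE OF e_t_range.

    CASE i_step.
      WHEN 1.
        IF i_vnam = 'ZCURMONTH'.          " hypothetical variable name
          " Propose the current calendar month as the default value.
          ls_range-sign = 'I'.
          ls_range-opt  = 'EQ'.
          ls_range-low  = sy-datum(6).    " YYYYMM
          APPEND ls_range TO e_t_range.
        ENDIF.
      WHEN 3.
        " Validate the combination of all variable entries here.
    ENDCASE.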

19. How do you replace a query result from a master query to a child query?

A: If you select a characteristic variable with replacement path, it uses the results from the previous query. For example, assume you have query Q1, which displays the top 10 customers, and query Q2, which has a variable on 0CUSTOMER with a replacement path and displays a detailed report on the customer list passed from Q1.

20. How do you define exception reporting in the background?

A: Use the Reporting Agent for this from the AWB. Click on the exception icon on the left; give a name and description. Select the exception from the query for reporting (drag and drop).

21. What kind of tools are available to monitor the overall Query Performance? 

22. What can I do if the database proportion is high for all queries?

23. How to call a BW Query from an ABAP program?

24. What are the extractor types? 

25. What are the steps to extract data from R/3? 

26. What are the steps to configure third party (BAPI) tools?

27. What are the delta options available when you load from flat file? 

28. What are the tables filled when you select ABAP?

29. What are the steps to create classes in SAP-BW?

30. How do you convert from LIS to LO extraction?

31. How do you keep yourself updated on major SAP developments?

 


Comments

  • 01 May 2008 5:29 pm Guest
    Can anyone give a physical interpretation of InfoSource and DataSource, and of transfer structure and communication structure?
  • 23 Nov 2017 4:36 pm Guest

    Definition:

    The LO extractor is used to extract logistics data from SAP R/3 to SAP BW.

    In earlier days LIS was used in place of LO.

    Differences between LIS and LO:

    Both are used to extract logistics data.

    LIS:

    1. LIS uses transparent tables.
    2. LIS uses the V1 and V2 update modes only.
    3. LIS extractors are customer generated.
    4. LIS generates a transfer structure.
    5. We need to create the DataSources and everything ourselves.
    6. Two transparent tables are used in the delta mechanism.

    LO:

    1. LO uses cluster tables.
    2. LO uses the V1, V2 and V3 update modes.
    3. LO extractors are BW content extractors.
    4. LO generates an extract structure.
    5. In LO we have pre-defined DataSources.
    6. An extraction queue, update queue and delta queue are used in the delta mechanism.

    The major difference is that LO provides DataSources specific to a level of information, for example 2LIS_02_HDR for purchasing order header data, 2LIS_02_ITM for purchasing order item data and 2LIS_02_SCL for purchasing order schedule line data, whereas LIS is specific to the particular application.

    FUNCTIONS OF LO COCKPIT (LBWE):
    1. Maintenance of extract structures.
    2. Maintenance of DataSources.
    3. Activating updates.
    4. Controlling updates.

    The available delta update methods are:

    • Direct Delta
    • Unserialized V3 Update
    • Queued Delta

    In the LO Cockpit we use setup tables for the init/full upload. These are also called cluster tables.

     

    UPDATE MODES:

    V1 -- Synchronous update. The data is transferred in a serialized manner: if users A and B are posting documents, a document must be successfully posted to the specific queue before the next one is processed. If the document submitted by user A fails, the documents from the other users wait until the problem is resolved.

    V2 -- Asynchronous update. Even if the document posted by user A fails, the documents from the other users are still updated to the specific queue, regardless of which document was submitted first.

    V3 -- Asynchronous batch update. The documents are posted in the background at a specified time interval. V3 is used specifically for BW extraction.

     

    Direct Delta:

    When we run an InfoPackage with a full load, the data is taken from the setup tables. So we need to fill the setup tables (this is also called the statistical setup), using t-code OLI3BW for purchasing order data. Before doing the statistical setup we need to delete the old data from the setup tables using t-code LBWG or SE14.

    LBWG -- Used to delete the setup table data of an entire application, for example 02 (purchasing), 03 (inventory), 11 (sales), etc.

    SE14 -- Used to delete the data of a specific setup table / DataSource, for example 2LIS_02_HDR (purchasing document header data) or 2LIS_11_VAITM (sales document item data), etc.

     

    Q ) Why do we need to delete the data from the setup table?

    A) Because it may contain garbage values from a previous run.

     

    First we need to lock the relevant transactions using t-code SM01, for example ME21 (create), ME22 (change) and ME23 (display), so that no documents are posted while the setup runs.

     

    Accessing the application tables directly is not permitted, hence setup tables are used to collect the required data from the application tables.

     

    Enter t-code OLI3BW to fill the setup table with purchasing document data (application 02).

    The setup table gets the data from the application tables (VBAK, VBAP, EKKO, EKPO, etc.) through the communication structure. The entire data is then loaded from the setup table to the DataSource through the extract structure when you run an InfoPackage with a full load.

     

    In Direct Delta, when you post a document the data records (new or modified) are written directly to the BW delta queue (RSA7) and to the application tables at the same time.

     

    Advantages:

    1. Using this we can send a maximum of up to 10,000 data records.
    2. There is no need to schedule the V3 job control to transfer the data into BW.

    Disadvantages:

    1. For each document posting the data is stored in a separate LUW of the BW delta queue.
    2. Different document changes are not summarized into a single LUW, so the number of LUWs increases significantly with each document posting, which decreases the performance of the OLTP system.

    So this method is only recommended for customers with a low volume of document postings.

     

    Unserialized V3 Update:

    When a document is posted, the data is written to the application tables (V1) and to the update queue (V2), instead of directly into the BW delta queue.

    Using the V3 job control, the data is then posted to the BW delta queue at a specified time interval.

     

    Advantages:

    1. We can load up to 100,000 records.
    2. Using the V3 job control we can schedule the time period for the data transfer.
    3. This is recommended when we want to send the data to InfoCubes or DSOs in additive mode.

     

    Disadvantages:

    Serialization is not ensured.

     

     

    Queued Delta:

    When a document is posted, the data records are written to the extraction queue and then moved to the BW delta queue by the V3 job control.

     

    Advantages:

    1. Supports multiple languages.
    2. We can load more than 100,000 records.
    3. Serialization is possible.
    4. Changed or delta records are summarized into a single LUW, which improves OLTP performance.
    5. Using the V3 job control we can schedule the time period for transferring the data to the BW delta queue.

    Q) What is an LUW?
        It is a logical unit of work. It works like the change log table of a DSO and maintains a new image (N), a before image (X) and an after image (' ') for changed or modified records. For example, if a key figure changes from 10 to 15, the before image carries -10 and the after image carries 15.

     

