Additions in ODS
Hi gurus,
I want to add PO line item attributes (order line number, order line unit price and invoice unit price) to an already existing ODS. Can you tell me the procedure: where and how do I add these fields in the existing ODS, with detailed steps?
Any inputs are greatly appreciated.
Thanks in advance,
yogeswaran.
Hi Yogi,
You can add new data fields to an existing ODS without deleting the data in it.
Go to the ODS in change mode, right-click on the data fields section, and add the new objects.
Then check and activate.
For historic data these new fields will be blank.
Hope it helps,
Srini
Similar Messages
-
Hi Friends,
I recently faced an interview.
Please send answers to these questions:
How many data fields and key fields can we create in a DSO?
Can you overwrite key fields or data fields?
Which update mode do we use in delta queue extraction (V1, V2 or V3)?
Which message do we get when a transported request fails?
What is the structural difference between an InfoCube and a DSO?
Data loading takes a huge amount of time when we extract data from the source system to the BI system; how do we solve this? (Before, it took 3-4 hours; now data loading takes 4 days.)
What is the difference between a Display Attribute and a Navigational Attribute? How do you make an attribute display or navigational?
How to load flat file data?
How to load Hierarchy file data?
What is HACR?
How to maintain HACR?
If any issue in HACR then how to resolve the issue?
What is Baby Cube?
Why we are creating Aggregates?
What is the use of Aggregates?
Is there any particular field on which we can create Aggregates, or can we maintain an Aggregate on any field?
What are the different DSOs available, and what is the difference between them?
What is replacement path?
1. What are the extractor types?
• Application Specific
o BW Content Extractors: FI, HR, CO, SAP CRM, LO Cockpit
o Customer-Generated Extractors: LIS, FI-SL, CO-PA
• Cross Application (Generic Extractors)
o DB View, InfoSet, Function Module
2. What are the steps involved in LO Extraction?
• The steps are:
o RSA5 Select the DataSources
o LBWE Maintain DataSources and Activate Extract Structures
o LBWG Delete Setup Tables
o OLI*BW Fill Setup Tables
o RSA3 Check extraction and the data in Setup tables
o LBWQ Check the extraction queue
o LBWF Log for LO Extract Structures
o RSA7 BW Delta Queue Monitor
3. How to create a connection with LIS InfoStructures?
• LBW0 Connecting LIS InfoStructures to BW
4. What is the difference between ODS and InfoCube and MultiProvider?
• ODS: Provides granular data, allows overwrite and data is in transparent
tables, ideal for drilldown and RRI.
• CUBE: Follows the star schema, we can only append data, ideal for primary
reporting.
• MultiProvider: Does not have physical data. It allows access to data from
different InfoProviders (Cube, ODS, InfoObject). It is also preferred for
reporting.
5. What are Start routines, Transfer routines and Update routines?
• Start Routines: The start routine is run for each DataPackage after the data
has been written to the PSA and before the transfer rules have been executed.
It allows complex computations for a key figure or a characteristic. It has no
return value. Its purpose is to execute preliminary calculations and to store
them in global DataStructures. This structure or table can be accessed in the
other routines. The entire DataPackage in the transfer structure format is used
as a parameter for the routine.
• Transfer / Update Routines: They are defined at the InfoObject level. It is
like the Start Routine. It is independent of the DataSource. We can use this to
define Global Data and Global Checks.
6. What is the difference between start routine and update routine, when, how
and why are they called?
• The start routine can be used to process the whole DataPackage, while update
routines are used while updating the data targets.
7. What is the table that is used in start routines?
• Always the table structure will be the structure of an ODS or InfoCube. For
example if it is an ODS then active table structure will be the table.
8. Explain how you used Start routines in your project?
• Start routines are used for mass processing of records. In the start routine all
the records of the DataPackage are available for processing, so we can process all
these records together. In one scenario, we wanted to apply size percentages to
forecast data. For example, if material M1 is forecast at 100 units in May, then
after applying the size split (Small 20%, Medium 40%, Large 20%, Extra Large 20%)
we wanted 4 records in place of the single record coming in from the InfoPackage.
This is achieved in the start routine, as sketched below.
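A hedged sketch of how such a split could look in a BW 3.x update-rule start routine. DATA_PACKAGE comes from the generated routine frame; the field names (SIZE, QUANTITY) and the hard-coded percentages are illustrative assumptions, not the original project's code.

* Hedged sketch: fan one forecast record out into four size records.
DATA: lt_out LIKE DATA_PACKAGE OCCURS 0 WITH HEADER LINE.

LOOP AT DATA_PACKAGE.
  lt_out = DATA_PACKAGE.
  lt_out-size     = 'S'.                           " Small 20%
  lt_out-quantity = DATA_PACKAGE-quantity * '0.2'.
  APPEND lt_out.
  lt_out-size     = 'M'.                           " Medium 40%
  lt_out-quantity = DATA_PACKAGE-quantity * '0.4'.
  APPEND lt_out.
  lt_out-size     = 'L'.                           " Large 20%
  lt_out-quantity = DATA_PACKAGE-quantity * '0.2'.
  APPEND lt_out.
  lt_out-size     = 'XL'.                          " Extra Large 20%
  lt_out-quantity = DATA_PACKAGE-quantity * '0.2'.
  APPEND lt_out.
ENDLOOP.

* Replace the package contents with the expanded record set.
DATA_PACKAGE[] = lt_out[].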
9. What are Return Tables?
• When we want to return multiple records instead of a single value, we use the
return table in the update routine. Example: if we have the total telephone expense
for a cost center, using a return table we can get the expense per employee (see
the sketch below).
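A hedged sketch of such a key-figure routine with the "Return table" option checked. COMM_STRUCTURE and RESULT_TABLE come from the generated routine frame; the table ZEMP and all field names are hypothetical.

* Hedged sketch: fan a cost-center total out to one row per employee.
DATA: lt_emp TYPE STANDARD TABLE OF zemp,   " hypothetical employee table
      ls_emp TYPE zemp,
      lv_cnt TYPE i.

SELECT * FROM zemp INTO TABLE lt_emp
  WHERE costcenter = COMM_STRUCTURE-costcenter.

DESCRIBE TABLE lt_emp LINES lv_cnt.
CHECK lv_cnt > 0.

LOOP AT lt_emp INTO ls_emp.
  CLEAR RESULT_TABLE.
  RESULT_TABLE-employee = ls_emp-employee.
* Split the total expense evenly across the employees.
  RESULT_TABLE-expense  = COMM_STRUCTURE-expense / lv_cnt.
  APPEND RESULT_TABLE.
ENDLOOP.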
10. How do start routine and return table synchronize with each other?
• The return table is used to return multiple values per incoming record after the
start routine has executed.
11. What is the difference between V1, V2 and V3 updates?
• V1 Update: It is a Synchronous update. Here the Statistics update is carried
out at the same time as the document update (in the application
tables).
• V2 Update: It is an Asynchronous update. Statistics update and the Document
update take place as different tasks.
o V1 & V2 don't need scheduling.
• Serialized V3 Update: The V3 collective update must be scheduled as a job
(via LBWE). Here, document data is collected in the order it was created and
transferred into the BW as a batch job. The transfer sequence may not be the
same as the order in which the data was created in all scenarios. V3 update
only processes the update data that is successfully processed with the V2
update.
12. What is compression?
• Compression moves the data of an InfoCube from the F fact table to the E fact
table, deleting the Request IDs and aggregating duplicate records; this saves space.
13. What is Rollup?
• This is used to load new DataPackages (requests) into the InfoCube
aggregates. If we have not performed a rollup then the new InfoCube data will
not be available while reporting on the aggregate.
14. What is table partitioning and what are the benefits of partitioning in an
InfoCube?
• It is the method of dividing a table to enable quick access. SAP uses fact table
partitioning to improve performance. We can partition only on 0CALMONTH or 0FISCPER.
Table partitioning helps reports run faster as data is stored in the relevant
partitions. Table maintenance also becomes easier. Oracle, Informix and IBM DB2/390
support table partitioning, while SAP DB, Microsoft SQL Server and IBM DB2/400 do
not.
15. How many extra partitions are created and why?
• Two extra partitions are created: one for dates before the begin date and one for
dates after the end date of the partitioning range.
16. What are the options available in transfer rule?
• InfoObject
• Constant
• Routine
• Formula
17. How would you optimize the dimensions?
• We should define as many dimensions as possible, taking care that no single
dimension exceeds about 20% of the fact table size.
18. What are Conversion Routines for units and currencies in the update rule?
• Using this option we can write ABAP
code for Units / Currencies conversion. If we enable this flag then unit of Key
Figure appears in the ABAP code as an additional parameter. For example, we can
convert units in Pounds to Kilos.
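As an illustration, a hedged sketch of a pounds-to-kilograms conversion coded manually in a routine, using the standard function module UNIT_CONVERSION_SIMPLE; the COMM_STRUCTURE and RESULT field names are assumptions, and the interface should be verified in SE37 on your release.

* Hedged sketch: convert a weight from pounds (LB) to kilograms (KG).
DATA: lv_kg TYPE p DECIMALS 3.

CALL FUNCTION 'UNIT_CONVERSION_SIMPLE'
  EXPORTING
    input    = COMM_STRUCTURE-weight   " value in pounds (assumed field)
    unit_in  = 'LB'
    unit_out = 'KG'
  IMPORTING
    output   = lv_kg
  EXCEPTIONS
    OTHERS   = 1.

IF sy-subrc = 0.
  RESULT = lv_kg.
ENDIF.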
19. Can an InfoObject be an InfoProvider, how and why?
• Yes, when we want to report on Characteristics or Master Data. We have to
right click on the InfoArea and select "Insert characteristic as data
target". For example, we can make 0CUSTOMER as an InfoProvider and report
on it.
20. What is Open Hub Service?
• The Open Hub Service enables us to distribute data from an SAP BW system into
external Data Marts, analytical applications, and other applications. We can
ensure controlled distribution using several systems. The central object for
exporting data is the InfoSpoke. We can define the source and the target object
for the data. BW becomes a hub of an enterprise data warehouse.
The distribution of data becomes clear through central monitoring from the
distribution status in the BW system.
21. How do you transform Open Hub Data?
• Using BADI we can transform Open Hub Data according to the destination
requirement.
22. What is ODS?
• ODS (Operational Data Store) is used for detailed storage of data. We can overwrite
data in the ODS. The data is stored in transparent tables.
23. What are BW Statistics and what is its use?
• They are a group of Business Content InfoCubes used to measure performance for
query and load monitoring. They also show the usage of aggregates, OLAP and
warehouse management.
24. What are the steps to extract data from R/3?
• Replicate DataSources
• Assign InfoSources
• Maintain Communication Structure and Transfer rules
• Create an InfoPackage
• Load Data
25. What are the delta options available when you load from flat file?
• The 3 options for Delta Management with Flat Files:
o Full Upload
o New Status for Changed records (ODS Object only)
o Additive Delta (ODS Object & InfoCube)
Q) Under which menu path is the Test Workbench to be found, including in
earlier Releases?
The menu path is: Tools - ABAP Workbench - Test - Test Workbench.
Q) I want to delete a BEx query that is in the Production system, via a transport
request. Is anyone aware how to do this?
A) Have you tried the RSZDELETE transaction?
Q) Errors while monitoring process chains.
A) Errors can occur during data loading. Apart from that, a process chain contains
many process types; for example, after loading data into an InfoCube you roll up
data into aggregates, and this rollup is a process type that you place after the
process type for loading data into the cube. This rollup into aggregates might fail.
Another example: after you load data into an ODS, you activate the ODS data (another
process type), and this might also fail.
Q) In Monitor → Details (Header/Status/Details) → Under Processing (data
packet): Everything OK → context menu of Data Package 1 (1 record): Everything
OK → Simulate update. (Here we can debug update rules or transfer rules.)
SM50 → Program/Mode → Program → Debugging, and debug this work process.
Q) PSA Cleansing.
A) You know how to edit PSA. I don't think you can delete single records. You
have to delete entire PSA data for a request.
Q) Can we make a DataSource support delta?
A) If this is a custom (user-defined) DataSource, you can make the DataSource
delta-enabled. While creating the DataSource in RSO2, after entering the DataSource
name and pressing Create, there is a button at the top of the next screen labeled
Generic Delta. If you want more details, there is a chapter on this toward the end
of the Extraction book.
Generic delta services:
Supports delta extraction for generic extractors according to:
Time stamp
Calendar day
Numeric pointer, such as document number & counter
Only one of these attributes can be set as a delta attribute.
Delta extraction is supported for all generic extractors, such as tables/views,
SAP Query and function modules
The delta queue (RSA7) allows you to monitor the current status of the delta
attribute
Q) Workbooks, as a general rule, should be transported with the
role.
Here are a couple of scenarios:
1. If both the workbook and its role have been previously transported, then the
role does not need to be part of the transport.
2. If the role exists in both dev and the target system but the workbook has
never been transported, then you have a choice of transporting the role
(recommended) or just the workbook. If only the workbook is transported, then
an additional step has to be taken after import: locate the workbook ID
via table RSRWBINDEXT (in Dev, and verify the same exists in the target system)
and manually add it to the role in the target system via transaction
code PFCG -- ALWAYS use Ctrl-C/Ctrl-V copy/paste for manually adding!
3. If the role does not exist in the target system you should transport both
the role and workbook. Keep in mind that a workbook is an object unto itself
and has no dependencies on other objects. Thus, you do not receive an error
message from the transport of 'just a workbook' -- even though it may not be
visible, it will exist (verified via Table RSRWBINDEXT).
Overall, as a general rule, you should transport roles with workbooks.
Q) How much time does it take to extract 1 million (10 lakh) records into
an InfoCube?
A) It depends: if you have complex coding in update rules it will take longer,
otherwise it will take less than 30 minutes.
Q) What are the five ASAP Methodology phases?
A: Project Preparation, Business Blueprint, Realization, Final Preparation, and
Go-Live & Support.
1. Project Preparation: In this phase, decision makers define clear project
objectives and an efficient decision-making process (i.e. discussions with the
client about needs and requirements). Project managers will be involved in this
phase (I guess).
A Project Charter is issued and an implementation strategy is outlined in this
phase.
2. Business Blueprint: It is a detailed documentation of your company's
requirements. (i.e. what are the objects we need to develop are modified
depending on the client's requirements).
3. Realization: This is where the actual implementation of the project takes place
(development of objects etc.), and we are involved in the project from here on.
4. Final Preparation: Final preparation before going live i.e. testing,
conducting pre-go-live, end user training etc.
End user training is given that is in the client site you train them how to
work with the new environment, as they are new to the technology.
5. Go-Live & support: The project has gone live and it is into production.
The Project team will be supporting the end users.
Q) What is the landscape of R/3 and what is the landscape of BW?
The BW landscape consists of a development system, a testing system and a production
system.
Development system: all the implementation work is done in this system (analysis of
objects, development, modification etc.), and from here the objects are transported
to the testing system; before transporting, an initial test known as unit testing
(testing of objects) is done in the development system.
Testing/Quality system: quality checks and integration testing are done in this
system.
Production system: all the extraction takes place in this system.
Q) How do you measure the size of infocube?
A: In no of records.
Q) Difference between InfoCube and ODS?
A: An InfoCube is structured as a star schema (extended) where a fact table is
surrounded by dimension tables linked via DIM IDs; data-wise, you have aggregated
data in cubes, with no overwrite functionality.
An ODS is a flat structure (flat table) with no star schema concept, holding
granular data (detailed level), with overwrite functionality.
Flat file DataSources do not support 0RECORDMODE in extraction. The 0RECORDMODE
values are: X = before image, ' ' (blank) = after image, N = new, A = additive,
D = delete, R = reverse.
Q) Difference between display attributes and navigational attributes?
A: A display attribute is used only for display purposes in the report, whereas a
navigational attribute is used for drilling down in the report. We don't need to
maintain the navigational attribute in the cube as a characteristic (that is the
advantage) to drill down.
Q. SOME DATA IS UPLOADED TWICE INTO INFOCUBE. HOW TO CORRECT IT?
A: But how is it possible? If you load it manually twice, then you can delete
it by requestID.
Q. CAN U ADD A NEW FIELD AT THE ODS LEVEL?
Sure you can. ODS is nothing but a table.
Q. CAN NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?
A) Yes of course. For example, for loading text and hierarchies we use
different data sources but the same InfoSource.
Q. BRIEF THE DATAFLOW IN BW.
A) Data flows from the transactional system to the analytical system (BW).
DataSources on the transactional system need to be replicated on the BW side and
attached to an InfoSource and update rules respectively.
Q. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER
RULES?
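For reference, a hedged sketch of a manual currency conversion as it might appear in an update routine, using the standard function module CONVERT_TO_LOCAL_CURRENCY; the COMM_STRUCTURE field names and the fixed target currency are assumptions.

* Hedged sketch: convert a document-currency amount to local currency.
DATA: lv_local TYPE p DECIMALS 2.

CALL FUNCTION 'CONVERT_TO_LOCAL_CURRENCY'
  EXPORTING
    date             = COMM_STRUCTURE-pstng_date   " assumed field
    foreign_amount   = COMM_STRUCTURE-amount       " assumed field
    foreign_currency = COMM_STRUCTURE-doc_currcy   " assumed field
    local_currency   = 'USD'                       " assumed target currency
  IMPORTING
    local_amount     = lv_local
  EXCEPTIONS
    OTHERS           = 1.

IF sy-subrc = 0.
  RESULT = lv_local.
ENDIF.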
Q) WHAT IS PROCEDURE TO UPDATE DATA INTO DATA TARGETS?
FULL and DELTA.
Q) AS WE USE Sbwnn, sbiw1, sbiw2 for delta update in LIS THEN
WHAT IS THE PROCEDURE IN LO-COCKPIT?
There is no LIS in the LO cockpit. We have DataSources there, which can be
maintained (append fields). Refer to the white paper on LO-Cockpit extraction.
Q) Why do we delete the setup tables (LBWG) and fill them (OLI*BW)?
A) Initially we don't delete the setup tables, but we do when we change the extract
structure. When we change the extract structure, there are newly added fields that
were not there before. So to get the required data (and to avoid redundancy) we
delete and then refill the setup tables.
This refreshes the statistical data.
The extraction setup reads the dataset that you want to process (such as customer
orders, with tables like VBAK and VBAP) and fills the relevant communication
structure with the data. The data is stored in cluster tables, from where it is
read when the initialization is run. It is important that during the initialization
phase no one generates or modifies application data, at least until the setup
tables have been filled.
Q) SIGNIFICANCE of ODS?
It holds granular data (detailed level).
Q) WHERE THE PSA DATA IS STORED?
In PSA table.
Q) WHAT IS DATA SIZE?
The volume of data one data target holds (in no. of records)
Q) Different types of INFOCUBES.
Basic, and Virtual (remote, SAP remote and multi).
A virtual cube is used, for example, where information must be read online at query
time: consider a railway reservation system where all the information has to be up
to date. For a virtual cube you write a function module that links to the underlying
table; the virtual cube is just a structure, and whenever the table is updated the
virtual cube fetches the data from the table and displays the report online. FYI,
you can get more information at https://www.sdn.sap.com/sdn/index.sdn - search for
"Designing Virtual Cube" and you will find good material on designing the function
module.
Q) INFOSET QUERY.
Can be made of ODSs and characteristic InfoObjects with master data.
Q) IF THERE ARE 2 DATASOURCES HOW MANY TRANSFER STRUCTURES ARE THERE.
In R/3 or in BW? 2 in R/3 and 2 in BW
Q) ROUTINES?
Exist in the InfoObject, transfer routines, update routines and start routine
Q) BRIEF SOME STRUCTURES USED IN BEX.
Rows and Columns, you can create structures.
Q) WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
Different Variable's are Texts, Formulas, Hierarchies, Hierarchy nodes &
Characteristic values.
Variable Types are
Manual entry /default value
Replacement path
SAP exit
Customer exit
Authorization
Q) HOW MANY LEVELS YOU CAN GO IN REPORTING?
You can drill down to any level by using Navigational attributes and jump
targets.
Q) WHAT ARE INDEXES?
Indexes are database indexes, which help in retrieving data quickly.
Q) DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.
Help! Refer documentation
Q) IS IT NECESSARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED?
No.
Q) WHAT IS THE SIGNIFICANCE OF KPI'S?
KPI's indicate the performance of a company. These are key figures
Q) AFTER THE DATA EXTRACTION
WHAT IS THE IMAGE POSITION.
After image (correct me if I am wrong)
Q) REPORTING AND RESTRICTIONS.
Help! Refer documentation.
Q) TOOLS USED FOR PERFORMANCE TUNING.
ST22, number range buffering, deleting indexes before load, etc.
Q) PROCESS CHAINS: IF YOU HAVE USED THEM, HOW DO YOU SCHEDULE DATA LOADS DAILY?
There should be some tool to run the jobs daily (SM37 jobs).
Q) AUTHORIZATIONS.
Profile generator
Q) WEB REPORTING.
What are you expecting??
Q) CAN A CHARACTERISTIC INFOOBJECT BE AN INFOPROVIDER?
Of course
Q) PROCEDURES OF REPORTING ON MULTICUBES
Refer help. What are you expecting? MultiCube works on Union condition
Q) EXPLAIN TRANSPORTATION OF OBJECTS?
Dev → QA and Dev → Prod
Q) What types of partitioning are there for BW?
There are two partitioning performance aspects for BW (Cube & PSA):
A) Query data retrieval performance improvement: partitioning by (say) date range
improves data retrieval by making the best use of database execution plans and
indexes (of, say, an Oracle database engine).
B) Transactional load partitioning improvement: partitioning based on expected load
volumes and data element sizes improves data loading into the PSA and cubes via
InfoPackages (e.g. without timeouts).
Q) How can I compare data in R/3 with data in a BW Cube after the daily delta
loads? Are there any standard procedures for checking them or matching the
number of records?
A) You can go to R/3 TCode RSA3 and run the extractor. It will give you the
number of records extracted. Then go to BW Monitor to check the number of
records in the PSA and check to see if it is the same & also in the monitor
header tab.
A) RSA3 is a simple extractor checker program that allows you to rule out
extract problems in R/3. It is simple to use, but only really tells you if the
extractor works. Since records that get updated into Cubes/ODS structures are
controlled by Update Rules, you will not be able to determine what is in the
Cube compared to what is in the R/3 environment. You will need to compare
records on a 1:1 basis against records in R/3 transactions for the functional
area in question. I would recommend enlisting the help of the end user community
to assist since they presumably know the data.
To use RSA3, go to it and enter the extractor ex: 2LIS_02_HDR. Click execute
and you will see the record count, you can also go to display that data. You
are not modifying anything so what you do in RSA3 has no effect on data quality
afterwards. However, it will not tell you how many records should be expected
in BW for a given load. You have that information in the monitor RSMO during
and after data loads. From RSMO for a given load you can determine how many
records were passed through the transfer rules from R/3, how many targets were
updated, and how many records passed through the Update Rules. It also gives
you error messages from the PSA.
Q) Types of Transfer Rules?
A) Field to Field mapping, Constant, Variable & routine.
Q) Types of Update Rules?
A) (Check box), Return table
Q) Transfer Routine?
A) Routines which we write in transfer rules.
Q) Update Routine?
A) Routines which we write in update rules.
Q) What is the difference between writing a routine in transfer rules and
writing a routine in update rules?
A) If you are using the same InfoSource to update data in more than one data
target, it's better to write it in the transfer rules, because you can assign one
InfoSource to more than one data target, whereas whatever logic you write in update
rules is specific to one particular data target.
Q) Routine with Return Table.
A) Update rules generally only have one return value. However, you can create a
routine in the tab strip key figure calculation, by choosing checkbox Return
table. The corresponding key figure routine then no longer has a return value,
but a return table. You can then generate as many key figure values, as you
like from one data record.
Q) Start routines?
A) Start routines can be written in both update rules and transfer rules. Suppose
you want to restrict (delete) some records based on conditions before they get
loaded into data targets; you can specify this in the update rules start routine.
Ex: DELETE DATA_PACKAGE, which deletes records based on a condition (see the
sketch below).
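A minimal sketch, assuming the field name and value:

* Hedged sketch: drop unwanted records before they reach the target.
DELETE DATA_PACKAGE WHERE doc_type = 'ZZ'.   " field/value are assumptions
* In the BW 3.x start routine frame, setting ABORT <> 0 would instead
* cancel the load for the whole package.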
Q) X & Y Tables?
X-table = a table that links a characteristic's SIDs with the SIDs of its
time-independent navigation attributes.
Y-table = a table that links a characteristic's SIDs with the SIDs of its
time-dependent navigation attributes.
There are four types of SID tables:
X time-independent navigational attribute SID tables
Y time-dependent navigational attribute SID tables
H hierarchy SID tables
I hierarchy structure SID tables
Q) Filters & Restricted Key Figures (real-time example)
Restricted KFs you can have for an SD cube: billed quantity, billing value and
number of billing documents as RKFs.
Q) Line-Item Dimension (give me a real-time example)
Line-item dimension: invoice number or document number is a real-time example,
since such characteristics have almost as many distinct values as the fact table
has rows.
Q) What does the number in the 'Total' column in Transaction RSA7 mean?
A) The 'Total' column displays the number of LUWs that were written in the
delta queue and that have not yet been confirmed. The number includes the LUWs
of the last delta request (for repetition of a delta request) and the LUWs for
the next delta request. A LUW only disappears from the RSA7 display when it has
been transferred to the BW System and a new delta request has been received
from the BW System.
Q) How do I know which table (in SAP BW) contains the technical name, description
and creation date of a particular report (reports created using the BEx Analyzer)?
A) While you are opening a particular query, press the Properties button and you
will see all the details you wanted.
You will find information about the technical names and descriptions of queries in
the following tables: the directory of all reports (table RSRREPDIR) and the
directory of reporting component elements (table RSZELTDIR); for workbooks and
their connections to queries, check the where-used list for reports in workbooks
(table RSRWORKBOOK) and the titles of Excel workbooks in the InfoCatalog (table
RSRWBINDEXT).
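A hedged lookup sketch against the report directory named above; RSRREPDIR is the table from the answer, but the field names used below (INFOCUBE, COMPID) are assumptions to verify in SE11.

* Hedged sketch: list the technical names of queries on one InfoProvider.
DATA: lt_rep TYPE STANDARD TABLE OF rsrrepdir,
      ls_rep TYPE rsrrepdir.

SELECT * FROM rsrrepdir INTO TABLE lt_rep
  WHERE infocube = 'ZSD_C01'.       " hypothetical InfoProvider

LOOP AT lt_rep INTO ls_rep.
  WRITE: / ls_rep-compid.           " technical name (assumed field)
ENDLOOP.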
Q) What is a LUW in the delta queue?
A) A LUW from the point of view of the delta queue can be an individual
document, a group of documents from a collective run or a whole data packet of
an application
extractor.
Q) Why does the number in the 'Total' column in the overview screen of
Transaction RSA7 differ from the number of data records that is displayed when
you call the detail view?
A) The number on the overview screen corresponds to the total of LUWs (see also
first question) that were written to the qRFC queue and that have not yet been
confirmed. The detail screen displays the records contained in the LUWs. Both
the records belonging to the previous delta request and the records that do not
meet the selection conditions of the preceding delta init requests are filtered
out. Thus, only the records that are ready for the next delta request are
displayed on the detail screen. In the detail screen of Transaction RSA7, a
possibly existing customer exit is not taken into account.
Q) Why does Transaction RSA7 still display LUWs on the overview screen after
successful delta loading?
A) Only when a new delta has been requested does the source system learn that
the previous delta was successfully loaded to the BW System. Then, the LUWs of
the previous delta may be confirmed (and also deleted). In the meantime, the
LUWs must be kept for a possible delta request repetition. In particular, the
number on the overview screen does not change when the first delta was loaded
to the BW System.
Q) Why are selections not taken into account when the delta queue is filled?
A) Filtering according to selections takes place when the system reads from the
delta queue. This is necessary for reasons of performance.
Q) Why is there a DataSource with '0' records in RSA7 if delta exists and has
also been loaded successfully?
It is most likely that this is a DataSource that does not send delta data to
the BW System via the delta queue but directly via the extractor (delta for
master data using ALE change pointers). Such a DataSource should not be
displayed in RSA7. This error is corrected with BW 2.0B Support Package 11.
Q) Do the entries in table ROIDOCPRMS have an impact on the performance of the
loading procedure from the delta queue?
A) The impact is limited. If performance problems are related to the loading
process from the delta queue, then refer to the application-specific notes (for
example in the CO-PA area, in the logistics cockpit area and so on).
Caution: As of Plug In 2000.2 patch 3 the entries in table ROIDOCPRMS are as
effective for the delta queue as for a full update. Please note, however, that
LUWs are not split during data loading for consistency reasons. This means that
when very large LUWs are written to the DeltaQueue, the actual package size may
differ considerably from the MAXSIZE and MAXLINES parameters.
Q) Why does it take so long to display the data in the delta queue (for example
approximately 2 hours)?
A) With Plug In 2001.1 the display was changed: the user has the option of
defining the amount of data to be displayed, to restrict it, to selectively
choose the number of a data record, to make a distinction between the 'actual'
delta data and the data intended for repetition and so on.
Q) What is the purpose of function 'Delete data and meta data in a queue' in
RSA7? What exactly is deleted?
A) You should act with extreme caution when you use the deletion function in
the delta queue. It is comparable to deleting an InitDelta in the BW System and
should preferably be executed there. You do not only delete all data of this
DataSource for the affected BW System, but also lose the entire information
concerning the delta initialization. Then you can only request new deltas after
another delta initialization.
When you delete the data, the LUWs kept in the qRFC queue for the corresponding
target system are confirmed. Physical deletion only takes place in the qRFC
outbound queue if there are no more references to the LUWs.
The deletion function is for example intended for a case where the BW System,
from which the delta initialization was originally executed, no longer exists
or can no longer be accessed.
Q) Why does it take so long to delete from the delta queue (for example half a
day)?
A) Import PlugIn 2000.2 patch 3. With this patch the performance during
deletion is considerably improved.
Q) Why is the delta queue not updated when you start the V3 update in the
logistics cockpit area?
A) It is most likely that a delta initialization had not yet run or that the
delta initialization was not successful. A successful delta initialization (the
corresponding request must have QM status 'green' in the BW System) is a
prerequisite for the application data being written in the delta queue.
Q) What is the relationship between RSA7 and the qRFC monitor (Transaction
SMQ1)?
A) The qRFC monitor basically displays the same data as RSA7. The internal
queue name must be used for selection on the initial screen of the qRFC
monitor. This is made up of the prefix 'BW, the client and the short name of
the DataSource. For DataSources whose name are 19 characters long or shorter,
the short name corresponds to the name of the DataSource. For DataSources whose
name is longer than 19 characters (for delta-capable DataSources only possible
as of PlugIn 2001.1) the short name is assigned in table ROOSSHORTN.
In the qRFC monitor you cannot distinguish between repeatable and new LUWs.
Moreover, the data of a LUW is displayed in an unstructured manner there.
Q) Why is there data in the delta queue although the V3 update was not started?
A) Data was posted in background. Then, the records are updated directly in the
delta queue (RSA7). This happens in particular during automatic goods receipt
posting (MRRS). There is no duplicate transfer of records to the BW system. See
Note 417189.
Q) Why does button 'Repeatable' on the RSA7 data details screen not only show
data loaded into BW during the last delta but also data that were newly added,
i.e. 'pure' delta records?
A) It was programmed so that a request in repeat mode fetches both the actually
repeatable (old) data and new data from the source system.
Q) I loaded several delta inits with various selections. For which one is the
delta loaded?
A) For delta, all selections made via delta inits are summed up. This means, a
delta for the 'total' of all delta initializations is loaded.
Q) How many selections for delta inits are possible in the system?
A) With simple selections (intervals without complicated join conditions or
single values), you can make up to about 100 delta inits. It should not be
more.
With complicated selection conditions, it should be only up to 10-20 delta
inits.
Reason: With many selection conditions that are joined in a complicated way,
too many 'where' lines are generated in the generated ABAP
source code that may exceed the memory limit.
Q) I intend to copy the source system, i.e. make a client copy. What will
happen with my delta? Should I initialize again after that?
A) Before you copy a source client or source system, make sure that your deltas
have been fetched from the DeltaQueue into BW and that no delta is pending.
After the client copy, an inconsistency might occur between BW delta tables and
the OLTP delta tables as described in Note 405943. After the client copy, Table
ROOSPRMSC will probably be empty in the OLTP since this table is
client-independent. After the system copy, the table will contain the entries
with the old logical system name that are no longer useful for further delta
loading from the new logical system. The delta must be initialized in any case,
since the delta depends on both the BW system and the source system. Even if no
dump 'MESSAGE_TYPE_X' occurs in BW when editing or creating an InfoPackage, you
should expect that the delta has to be initialized after the copy.
Q) Is it allowed in Transaction SMQ1 to use the functions for manual control of
processes?
A) Use SMQ1 as an instrument for diagnosis and control only. Make changes to BW
queues only after informing the BW Support or only if this is explicitly
requested in a note for component 'BC-BW' or 'BW-WHM-SAPI'.
Q) Despite of the delta request being started after completion of the
collective run (V3 update), it does not contain all documents. Only another
delta request loads the missing documents into BW. What is the cause for this
"splitting"?
A) The collective run submits the open V2 documents for processing to the task
handler, which processes them in one or several parallel update processes in an
asynchronous way. For this reason, plan a sufficiently large "safety time
window" between the end of the collective run in the source system and the
start of the delta request in BW. An alternative solution where this problem
does not occur is described in Note 505700.
Q) Despite my deleting the delta init, LUWs are still written into the
DeltaQueue?
A) In general, delta initializations and deletions of delta inits should always
be carried out at a time when no posting takes place. Otherwise, buffer
problems may occur: If a user started the internal mode at a time when the
delta initialization was still active, he/she posts data into the queue even
though the initialization had been deleted in the meantime. This is the case in
your system.
Q) In SMQ1 (qRFC Monitor) I have status 'NOSEND'. In the table TRFCQOUT, some
entries have the status 'READY', others 'RECORDED'. ARFCSSTATE is 'READ'. What
do these statuses mean? Which values in the field 'Status' mean what and which
values are correct and which are alarming? Are the statuses BW-specific or
generally valid in qRFC?
A) Table TRFCQOUT and ARFCSSTATE: Status READ means that the record was read
once either in a delta request or in a repetition of the delta request.
However, this does not mean that the record has successfully reached the BW
yet. The status READY in the TRFCQOUT and RECORDED in the ARFCSSTATE means that
the record has been written into the DeltaQueue and will be loaded into the BW
with the next delta request or a repetition of a delta. In any case only the
statuses READ, READY and RECORDED in both tables are considered to be valid.
The status EXECUTED in TRFCQOUT can occur temporarily. It is set before
starting a DeltaExtraction for all records with status READ present at that
time. The records with status EXECUTED are usually deleted from the queue in
packages within a delta request directly after setting the status before
extracting a new delta. If you see such records, it means that either a process
which is confirming and deleting records which have been loaded into the BW is
successfully running at the moment, or, if the records remain in the table for
a longer period of time with status EXECUTED, it is likely that there are
problems with deleting the records which have already been successfully been
loaded into the BW. In this state, no more deltas are loaded into the BW. Every
other status is an indicator for an error or an inconsistency. NOSEND in SMQ1
means nothing (see note 378903).
The value 'U' in field 'NOSEND' of table TRFCQOUT is discomforting.
Q) The extract structure was changed when the DeltaQueue was empty. Afterwards
new delta records were written to the DeltaQueue. When loading the delta into
the PSA, it shows that some fields were moved. The same result occurs when the
contents of the DeltaQueue are listed via the detail display. Why are the data
displayed differently? What can be done?
Make sure that the change of the extract structure is also reflected in the
database and that all servers are synchronized. We recommend to reset the
buffers using Transaction $SYNC. If the extract structure change is not
communicated synchronously to the server where delta records are being created,
the records are written with the old structure until the new structure has been
generated. This may have disastrous consequences for the delta.
When the problem occurs, the delta needs to be re-initialized.
Q) How and where can I control whether a repeat delta is requested?
A) Via the status of the last delta in the BW Request Monitor. If the request
is RED, the next load will be of type 'Repeat'. If you need to repeat the last
load for certain reasons, set the request in the monitor to red manually. For
the contents of the repeat see Question 14. Delta requests set to red despite
of data being already updated lead to duplicate records in a subsequent repeat,
if they have not been deleted from the data targets concerned before.
Q) As of PI 2003.1, the Logistic Cockpit offers various types of update
methods. Which update method is recommended in logistics? According to which
criteria should the decision be made? How can I choose an update method in
logistics?
See the recommendation in Note 505700.
Q) Are there particular recommendations regarding the data volume the
DeltaQueue may grow to without facing the danger of a read failure due to
memory problems?
A) There is no strict limit (except for the restricted number range of the
24-digit QCOUNT counter in the LUW management table - which is of no practical
importance, however - or the restrictions regarding the volume and number of
records in a database table).
When estimating "smooth" limits, both the number of LUWs is important
and the average data volume per LUW. As a rule, we recommend bundling data
(usually documents) already when writing to the DeltaQueue to keep the number of
LUWs small (partly this can be set in the applications, e.g. in the Logistics
Cockpit). The data volume of a single LUW should not be considerably larger
than 10% of the memory available to the work process for data extraction
(in a 32-bit architecture with a memory volume of about 1GByte per work
process, 100 Mbytes per LUW should not be exceeded). That limit is of rather
small practical importance as well since a comparable limit already applies
when writing to the DeltaQueue. If the limit is observed, correct reading is
guaranteed in most cases.
If the number of LUWs cannot be reduced by bundling application transactions,
you should at least make sure that the data are fetched from all connected BWs
as quickly as possible. But for other, BW-specific, reasons, the frequency
should not be higher than one DeltaRequest per hour.
To avoid memory problems, a program-internal limit ensures that never more than
1 million LUWs are read and fetched from the database per DeltaRequest. If this
limit is reached within a request, the DeltaQueue must be emptied by several
successive DeltaRequests. We recommend, however, to try not to reach that limit
but trigger the fetching of data from the connected BWs already when the number
of LUWs reaches a 5-digit value.
Q) I would like to display the date the data was uploaded on the
report. Usually, we load the transactional data nightly. Is there any easy way
to include this information on the report for users? So that they know the
validity of the report.
A) If I understand your requirement correctly, you want to display the date on
which data was loaded into the data target from which the report is being
executed. If so, configure your workbook to display the text elements in
the report. This displays the 'Relevance of Data' field, which is the date on which
the data load took place.
Q) Can we filter the fields at Transfer Structure?
Q) Can we load data directly into an InfoObject without extraction? Is it
possible?
A) Yes. We can copy from another InfoObject if it is the same, and we can load data
from the PSA if it is already in the PSA.
Q) HOW MANY DAYS CAN WE KEEP THE DATA IN THE PSA IF WE ARE SCHEDULED DAILY, WEEKLY
AND MONTHLY?
a) We can set the retention time.
Q) HOW CAN YOU GET THE DATA FROM THE CLIENT IF YOU ARE WORKING ON OFFSHORE
PROJECTS? THROUGH WHICH NETWORK?
a) VPN (Virtual Private Network). A VPN is a network through which we can connect
to the client systems from offshore, via RAS (Remote Access Server).
Q) HOW DO YOU ANALYZE THE PROJECT AT FIRST?
Prepare the project plan and environment
Define project management standards and procedures
Define implementation standards and procedures
Testing & go-live + support.
Q) THERE IS ONE ODS AND 4 INFOCUBES. WE SEND DATA AT A TIME TO ALL CUBES, AND ONE
CUBE GOT A LOCK ERROR. HOW CAN YOU RECTIFY THE ERROR?
Go to TCode SM66, see which process is locked, select that PID, then go to
TCode SM12 and unlock it. This happens when lock errors occur during scheduled
loads.
Q) Can anybody tell me how to add a navigational attribute in the BEx report in
the rows?
A) Expand the dimension under the left side panel (the InfoCube panel), select the
navigational attribute, and drag and drop it under the Rows panel.
Q) ARE THERE ANY TRANSACTION CODES LIKE SMPT OR STMT?
In current systems (BW 3.0B and R/3 4.6B) these TCodes don't exist!
Q) WHAT IS TRANSACTIONAL CUBE?
A) Transactional InfoCubes differ from standard InfoCubes in that the former
have an improved write access performance level. Standard InfoCubes are
technically optimized for read-only access and for a comparatively small number
of simultaneous accesses. Instead, the transactional InfoCube was developed to
meet the demands of SAP Strategic Enterprise Management (SEM), meaning that,
data is written to the InfoCube (possibly by several users at the same time)
and re-read as soon as possible. Standard Basic cubes are not suitable for
this.
Q) Is there any way to delete cube contents within update rules from an ODS
data source? The reason for this would be to delete (or zero out) a cube record
in an "Open Order" cube if the open order quantity was 0.
I've tried using the 0recordmode but that doesn't work. Also, would it
be easier to write a program that would be run after the load and delete
the records with a zero open qty?
A) In the update rules START routine you can write ABAP code.
A) Yes, you can do it: create a start routine in the update rule.
Strictly speaking, it is not "deleting cube contents with update rules"; it is only
possible to prevent some content from being updated into the InfoCube using the
start routine. Loop over the records and delete each record that meets the
condition "the open order quantity was 0". You also have to think about
before and after images in the case of a delta upload; otherwise you may delete
the change record, keep the old one, and carry wrong information after the change.
A sketch follows below.
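A minimal sketch, assuming the field names OPEN_QTY and RECORDMODE; real delta handling of before/after images needs more care than shown here.

* Hedged sketch: suppress zero-quantity new images in the start routine.
LOOP AT DATA_PACKAGE.
  IF DATA_PACKAGE-open_qty = 0 AND DATA_PACKAGE-recordmode = 'N'.
    DELETE DATA_PACKAGE.            " deletes the current loop line
  ENDIF.
ENDLOOP.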
Q) I am not able to access a node in hierarchy directly using variables for
reports. When I am using Tcode RSZV it is giving a message that it doesn't
exist in BW 3.0 and it is embedded in BEx. Can any one tell me the other
options to get the same functionality in BEx?
A) TCode RSZV is used in earlier versions up to 3.0B only. From 3.0B onwards,
it's possible in the Query Designer (BEx) itself. Just right-click the
InfoObject you want to use as a variable and proceed by selecting the
variable type and processing type. -
Hello everybody,
I am having an interview for a BI/BW support consultant, and the interview spec
consists of data management techniques, improving and maintaining SAP BI monitoring
capabilities, solutions to support issues, and an understanding of the BCC SAP
solution and how its BW/BI configuration supports the business, plus knowledge of
WAD. Please send me the expected questions and answers; meanwhile I am searching
SDN using the spec.
Regards,
Priya

Hi Priya,
Here are some Q&A.
Normally the production support activities include
Scheduling
R/3 Job Monitoring
B/W Job Monitoring
Taking corrective action for failed data loads.
Working on some tickets with small changes in reports or in AWB objects.
The activities in a typical Production Support would be as follows:
1. Data Loading - could be using process chains or manual loads.
2. Resolving urgent user issues - helpline activities
3. Modifying BW reports as per the need of the user.
4. Creating aggregates in Prod system
5. Regression testing when version/patch upgrade is done.
6. Creating adhoc hierarchies.
we can perform the daily activities in Production
1. Monitoring Data load failures thru RSMO
2. Monitoring Process Chains Daily/weekly/monthly
3. Perform hierarchy/attribute change runs
4. Check aggregate rollups
To add to the above
1)check data targets are ready for reporting,
2) No failed or cancelled jobs in sm37 monitors and Bw Monitor.
3) All requests are loaded for day, monthly and yearly also.
4) Also to note down time taken for loading of critical info cubes which are used for reporting.
5) Is there any break in any schedules from your process chains.
Why are there frequent load failures during extractions, and how do you analyze
them?
If these failures are related to data, there might be data inconsistency in the
source system, even though we handle it properly in the transfer rules. We can
monitor these issues in T-code RSMO, fix the failed records in the PSA, and update.
If we are talking about the whole extraction process, there might be issues with
work process scheduling and IDoc transfer to the target system from the source
system. These issues can be re-initiated by canceling that specific data load
(usually by changing the request color from yellow to red in RSMO) and restarting
the extraction.
What are the daily tasks we do in production support? How many times do we extract
data, and at what times?
It depends. Data load timings are in the range of 30 minutes to 8 hours. The time
depends on the number of records and the kind of transfer rules you have; if the
transfer rules contain roundabout logic and the update rules have calculations for
customized key figures, long times are to be expected.
Usually you need to work in RSMO, see which records are failing, and update from
the PSA.
What are some of the frequent failures and errors?
There is no single fixed reason for a load to fail; from an interview perspective I
would answer it this way:
a) Loads can fail due to invalid characters
b) Because of a deadlock in the system
c) Because of a previous load failure, if the load is dependent on other loads
d) Because of erroneous records
e) Because of RFC connections
These are some of the reasons for load failures.
For RFC connections:
We use SM59 for creating RFC destinations.
Some questions:
1) RFC connection lost.
A) We can check it in the SM59 T-code:
RFC Destinations
+ R/3 connections
CRD client (our R/3 client)
Double-click, then Test Connection in the menu.
2) Invalid characters while loading.
A) Change them in the PSA and load again; if they recur, they can also be cleaned
in a routine (a sketch follows below).
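A hedged cleanup sketch for a transfer routine, assuming RESULT is a character field; the allowed set below is illustrative, while the real one is maintained via transaction RSKC.

* Hedged sketch: blank out characters that are not in the allowed set.
CONSTANTS: c_allowed(60) TYPE c VALUE
  'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 -_./'.

WHILE RESULT CN c_allowed.          " CN: contains anything not in set
  RESULT+sy-fdpos(1) = ' '.         " SY-FDPOS = offset of the offender
ENDWHILE.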
3) ALEREMOTE user is locked.
A) Ask your Basis team to release the user. It is mostly ALEREMOTE. Possible causes
and checks:
- Password changed
- Too many incorrect attempts to log in as ALEREMOTE
- Use the SM12 T-code to find out whether there are any locks.
4) Lower case letters not allowed.
A) Check the "lowercase letters" checkbox under the "General" tab of the InfoObject
(or correct the data in the PSA).
5) While loading the data I am getting a message 'Record
A) The field mentioned in the error message is not mapped to any InfoObject in the
transfer rules.
6) Object locked.
A) It might be locked by some other process or user. Also check for authorizations.
7) "Non-updated Idocs found in Source System".
8) While loading master data, one of the data packages has a red-light error
message: master data/text of characteristic ZCUSTSAL already deleted.
9) Extraction job aborted in R/3.
A) It might have been cancelled due to running longer than expected, or it may have
been cancelled by R/3 users if it was hampering performance.
10) Request couldn't be activated because there is another request in the PSA with
a smaller SID.
A)
11) Repeat of last delta not possible.
12) DataSource not replicated.
A) Replicate the DataSource from R/3 through the source system in the AWB, assign
it to the InfoSource, and activate it again.
13) DataSource/transfer structure not active.
A) Use the function module RS_TRANSTRU_ACTIVATE_ALL to activate it (a hedged call
sketch follows below).
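A minimal call sketch; the parameter names below are assumptions to verify in SE37.

* Hedged sketch: reactivate the transfer structure for one InfoSource.
CALL FUNCTION 'RS_TRANSTRU_ACTIVATE_ALL'
  EXPORTING
    i_infosource = 'ZIS_SALES'      " hypothetical InfoSource
    i_logsys     = 'R3PCLNT100'.    " hypothetical source system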
14) ODS activation error.
A) ODS activation errors can occur mainly due to the following reasons:
1. Invalid characters (characters like #)
2. Invalid data values for units/currencies etc.
3. Invalid values for the data types of characteristics & key figures
4. Errors in generating SID values for some data
15) Conversion routine error.
A) Check the data format in the source.
16) Object cannot be activated / error when activating an object.
A) Check the consistency of the object.
17) No data found (in a query).
A) Check whether the InfoProvider contains data, and delete any unsuccessful
requests.
18) Error generating or activating update rules.
1. What are the extractor types?
Application Specific
o BW Content FI, HR, CO, SAP CRM, LO Cockpit
o Customer-Generated Extractors
LIS, FI-SL, CO-PA
Cross Application (Generic Extractors)
o DB View, InfoSet, Function Module
2. What are the steps involved in LO Extraction?
The steps are:
o RSA5 Select the DataSources
o LBWE Maintain DataSources and Activate Extract Structures
o LBWG Delete Setup Tables
o 0LI*BW Setup tables
o RSA3 Check extraction and the data in Setup tables
o LBWQ Check the extraction queue
o LBWF Log for LO Extract Structures
o RSA7 BW Delta Queue Monitor
3. How to create a connection with LIS InfoStructures?
LBW0 Connecting LIS InfoStructures to BW
4. What is the difference between ODS and InfoCube and MultiProvider?
ODS: Provides granular data, allows overwrite and data is in transparent tables, ideal for drilldown and RRI.
CUBE: Follows the star schema, we can only append data, ideal for primary reporting.
MultiProvider: Does not have physical data. It allows to access data from different InfoProviders (Cube, ODS, InfoObject). It is also preferred for reporting.
5. What are Start routines, Transfer routines and Update routines?
Start Routines: The start routine is run for each DataPackage after the data has been written to the PSA and before the transfer rules have been executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and to store them in global DataStructures. This structure or table can be accessed in the other routines. The entire DataPackage in the transfer structure format is used as a parameter for the routine.
Transfer / Update Routines: They are defined at the InfoObject level. It is like the Start Routine. It is independent of the DataSource. We can use this to define Global Data and Global Checks.
6. What is the difference between start routine and update routine, when, how and why are they called?
Start routine can be used to access InfoPackage while update routines are used while updating the Data Targets.
7. What is the table that is used in start routines?
Always the table structure will be the structure of an ODS or InfoCube. For example if it is an ODS then active table structure will be the table.
8. Explain how you used Start routines in your project?
Start routines are used for mass processing of records. In start routine all the records of DataPackage is available for processing. So we can process all these records together in start routine. In one of scenario, we wanted to apply size % to the forecast data. For example if material M1 is forecasted to say 100 in May. Then after applying size %(Small 20%, Medium 40%, Large 20%, Extra Large 20%), we wanted to have 4 records against one single record that is coming in the info package. This is achieved in start routine.
9. What are Return Tables?
When we want to return multiple records, instead of single value, we use the return table in the Update Routine. Example: If we have total telephone expense for a Cost Center, using a return table we can get expense per employee.
10. How do start routine and return table synchronize with each other?
Return table is used to return the Value following the execution of start routine
11. What is the difference between V1, V2 and V3 updates?
V1 Update: It is a Synchronous update. Here the Statistics update is carried out at the same time as the document update (in the application tables).
V2 Update: It is an Asynchronous update. Statistics update and the Document update take place as different tasks.
o V1 & V2 dont need scheduling.
Serialized V3 Update: The V3 collective update must be scheduled as a job (via LBWE). Here, document data is collected in the order it was created and transferred into the BW as a batch job. The transfer sequence may not be the same as the order in which the data was created in all scenarios. V3 update only processes the update data that is successfully processed with the V2 update.
12. What is compression?
It is a process used to delete the Request IDs and this saves space.
13. What is Rollup?
This is used to load new DataPackages (requests) into the InfoCube aggregates. If we have not performed a rollup then the new InfoCube data will not be available while reporting on the aggregate.
14. What is table partitioning and what are the benefits of partitioning in an InfoCube?
It is the method of dividing a table which would enable a quick reference. SAP uses fact file partitioning to improve performance. We can partition only at 0CALMONTH or 0FISCPER. Table partitioning helps to run the report faster as data is stored in the relevant partitions. Also table maintenance becomes easier. Oracle, Informix, IBM DB2/390 supports table partitioning while SAP DB, Microsoft SQL Server, IBM DB2/400 do not support table portioning.
15. How many extra partitions are created and why?
Two partitions are created for date before the begin date and after the end date.
16. What are the options available in transfer rule?
InfoObject
Constant
Routine
Formula
17. How would you optimize the dimensions?
We should define as many dimensions as possible and we have to take care that no single dimension crosses more than 20% of the fact table size.
18. What are Conversion Routines for units and currencies in the update rule?
Using this option we can write ABAP code for Units / Currencies conversion. If we enable this flag then unit of Key Figure appears in the ABAP code as an additional parameter. For example, we can convert units in Pounds to Kilos.
19. Can an InfoObject be an InfoProvider, how and why?
Yes, when we want to report on Characteristics or Master Data. We have to right click on the InfoArea and select Insert characteristic as data target. For example, we can make 0CUSTOMER as an InfoProvider and report on it.
20. What is Open Hub Service?
The Open Hub Service enables us to distribute data from an SAP BW system into external Data Marts, analytical applications, and other applications. We can ensure controlled distribution using several systems. The central object for exporting data is the InfoSpoke. We can define the source and the target object for the data. BW becomes a hub of an enterprise data warehouse. The distribution of data becomes clear through central monitoring from the distribution status in the BW system.
21. How do you transform Open Hub Data?
Using BADI we can transform Open Hub Data according to the destination requirement.
22. What is ODS?
Operational DataSource is used for detailed storage of data. We can overwrite data in the ODS. The data is stored in transparent tables.
23. What are BW Statistics and what is its use?
They are group of Business Content InfoCubes which are used to measure performance for Query and Load Monitoring. It also shows the usage of aggregates, OLAP and Warehouse management.
24. What are the steps to extract data from R/3?
Replicate DataSources
Assign InfoSources
Maintain Communication Structure and Transfer rules
Create an InfoPackage
Load Data
25. What are the delta options available when you load from flat file?
The 3 options for Delta Management with Flat Files:
o Full Upload
o New Status for Changed records (ODS Object only)
o Additive Delta (ODS Object & InfoCube)
SAP BW Interview Questions 2
1) What is a process chain? How many types are there? How many do we use in a real-time scenario? Can we define interdependent processes, with tasks like data loading, cube compression, index maintenance, and master data & ODS activation, with the best possible performance and data integrity?
2) What is data integrity, and how can we achieve it?
3) What is index maintenance, and what is its purpose in real time?
4) When and why do we use InfoCube compression in real time?
5) What is meant by data modelling, and what does the consultant do in data modelling?
6) How can we enhance Business Content, and for what purpose do we enhance it (given that we can simply activate Business Content)?
7) What is fine-tuning, how many types are there, and for what purpose do we tune in real time? Can tuning only be done through InfoCube partitions and creating aggregates, or by other means as well?
8) What is meant by MultiProvider, and for what purpose do we use a MultiProvider?
9) What are scheduled and monitored data loads, and for what purpose?
Ans # 1:
Process chains exist in the Administrator Workbench. Using them we can automate ETTL processes, and they allow the BW team to schedule all activities and monitor them (T-code: RSPC).
PROCESS CHAIN - Before defining a PROCESS CHAIN, let us define a PROCESS within a process chain: a process is a procedure, either within SAP or external to it, with a defined start and end. This process runs in the background.
A PROCESS CHAIN is a set of such processes linked together in a chain. In other words, each process is dependent on the previous process, and the dependencies are clearly defined in the process chain.
This is normally done in order to automate a job or task that has to execute more than one process to complete.
To cancel a load job that is running in the source system for a particular process chain (PC):
1. Check the source system for that particular PC.
2. Select the request ID of the PC (it will be in the Header tab).
3. Go to SM37 in the source system.
4. Double-click on the job.
5. You will navigate to a screen.
6. There, click the "Job Details" button.
7. A small pop-up window appears.
8. In the pop-up screen, take a note of:
a) Executing server
b) WP Number/PID
9. Open a new SM37 session (/OSM37).
10. Click on the "Application Servers" button.
11. You can see the different application servers; go to the executing server from point 8 (a) and double-click it.
12. Go to the PID from point 8 (b).
13. On the leftmost side you can see a checkbox.
14. Tick the checkbox.
15. On the menu bar you can see "Process".
16. Under "Process" you have the option "Cancel with Core".
17. Click on that option. * -- Ramkumar K
Ans # 2:
Data integrity is about eliminating duplicate entries in the database and achieving normalization.
Ans # 4:
InfoCube compression consolidates the cube contents by eliminating duplicate entries: the request IDs are removed and records with identical keys are aggregated. Compressed InfoCubes require less storage space and are faster to read. Here the catch is: once you compress, you can no longer delete the data by request, so you are safe as long as you don't have any error in your modeling.
This compression can be done through Process Chain and also manually.
Tips by: Anand
Ans#3
Indexing is a process where the data is stored in an indexed manner. E.g. a phone book: when we write down somebody's number, Prasad's number goes under "P" and Rajesh's number goes under "R". What the phone book does is indexing; similarly, storing data by creating indexes is called indexing.
Ans#5
Data modelling is a process where you collect the facts, the attributes associated with the facts, the navigational attributes etc., and after you collect all these you decide which ones you will be using. This collection is done by interviewing the end users, the power users, the stakeholders etc. It is generally done by the team lead, the project manager or sometimes a senior consultant (4-5 years of experience), so if you are new you don't have to worry about it. But do remember that it is an important aspect of any data warehousing solution, so make sure that you have read about data modelling before attending any interview or even starting to work.
Ans#6
We can enhance Business Content by adding fields to it. Since BC is delivered by SAP, it may not contain all the InfoObjects, InfoCubes etc. that you want to use according to your company's data model. E.g. you have a customer InfoCube (in BC) but your company uses an attribute for, say, apartment number; then instead of constructing a whole new InfoCube you can add the above field to the existing BC InfoCube and get going.
Ans#7
Tuning is the most important process in BW. Tuning is done to increase efficiency, which means lowering the time for loading data into a cube, lowering the time for accessing a query, lowering the time for doing a drilldown, etc. Fine-tuning = lowering time (for everything possible). Tuning can be done by many means, not only partitions and aggregates; there are various other things you can do, e.g. compression, indexes, etc.
Ans#8
A MultiProvider can combine various InfoProviders for reporting purposes. For example, you can combine 4-5 InfoCubes, or 2-3 InfoCubes and 2-3 ODS objects, or an InfoCube, an ODS and master data, etc. You can refer to help.sap.com for more information.
Ans#9
A scheduled data load means you have scheduled the loading of data for a particular date and time; you can do this on the scheduler tab of the InfoPackage. Monitored means you are monitoring that particular data load, or other loads, using transaction RSMON.
1. Procedure for repeat delta?
You need to set the request status to red in the monitor screen and then delete the request from the ODS/cube. When you open the InfoPackage again, the system will prompt you for a repeat delta.
Also:
Go to RSA7 -> F2 -> Update Mode -> Delta Repetition.
Delta repetition is done based on the type of upload you are carrying out.
1. If you are loading master data, most of the time you will change the QM status to red and then repeat the delta; the repeat is allowed only if you make this change.
Sometimes you need to investigate further if the repeat of the delta is not allowed even after the QM status has been set to red.
If this is not the case, the source system and therefore also the extractor, have not yet received any information regarding the last delta and you must set the request to GREEN in the monitor using a QM action.
The system then requests a delta again since the last delta request has not yet occurred for the extractor.
Afterwards, you must reset the old request that you previously set to GREEN to RED since it was incorrect and it would otherwise be requested as a data target by an ODS.
Caution: If the terminated request was itself a REPEAT request, always set it to RED so that the system tries to carry out a repeat again.
To determine whether a delta or a repeat is to be requested, the system ONLY uses the status of the monitor.
It is irrelevant whether the request is updated in a data target somewhere.
When activating requests in an ODS, the system checks delta repeat requests for completeness and the correct sequence.
Each green delta/repeat request in the monitor that came from the same DataSource/source system combination must be updated in the ODS before activation, which means that in this case, you must set them back to RED in the monitor using a QM action when using the solution described above.
If the source of the data is a DataMart, it is not just the DELTARNR field that is relevant (in the roosprmsc table in the system in which the source DataMart is, which is usually your BW system since it is a Myself extraction in this case), rather the status of the request tabstrip control is relevant as well.
Therefore, after the last delta request has terminated, go to the administration of your data source and check whether the DataMart indicator is set for the request that you wanted to update last.
If this is NOT the case, you must NOT request a repeat since the system would also retransfer the data of the last delta but one.
This means, you must NOT start a delta InfoPackage which then would request a repeat because the monitor is still RED. For information about how to correct this problem, refer to the following section.
For more information about this, see also Note 873401.
Proceed as follows:
Delete the rest of this request from ALL updated data targets, set the terminated request to GREEN IN THE MONITOR and request a new DELTA.
Only if the DataMart indicator is set does the system carry out a repeat correctly and transfers only this data again.
This means that only in this case can you leave the monitor status as it is and restart the delta InfoPackage; this then creates a repeat request.
In addition, you can generally also reset the DATAMART indicator and then work using a delta request after you have set the incorrect request to GREEN in the monitor.
Simply start the delta InfoPackage after you have reset the DATAMART indicator AND after you have set the last request that was terminated to GREEN in the monitor.
After the delta request has been carried out successfully, remember to reset the old incorrect request to RED since otherwise the problems mentioned above will occur when you activate the data in a target ODS.
What is process chain and how you used it?
A) Process chains are a tool available in BW for automating the upload of master data and transaction data while taking care of the dependencies between processes.
B) In one of our scenarios we wanted to upload a wholesale price InfoObject holding the wholesale price for all materials, and then load transaction data. While loading the transaction data, the update rule performed a lookup on this InfoObject's master data table to populate the wholesale price. This dependency (first upload the master data, then the transaction data) was handled through the process chain. A sketch of such a lookup is given below.
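As an illustration of this kind of master-data lookup, here is a minimal 3.x-style update-rule routine sketch. The attribute /BIC/ZWSPRICE on the 0MATERIAL master data table /BI0/PMATERIAL is an invented example, not the poster's actual object:

* Sketch only: read a wholesale-price attribute from the material
* master data during the update; OBJVERS 'A' selects the active version.
  DATA: l_wsprice TYPE p DECIMALS 2.
  SELECT SINGLE /bic/zwsprice FROM /bi0/pmaterial
    INTO l_wsprice
    WHERE material = COMM_STRUCTURE-material
      AND objvers  = 'A'.
  IF sy-subrc = 0.
    RESULT = l_wsprice.
  ENDIF.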
What is process chain and how you used it?
A) We have used process chains to automate the delta loading process. Once you are finished with your design and testing you can automate the processes listed in RSPC. I have a real time example in the attachment.
1. What is process chain and how you used it?
Process chains are a tool available in BW for automating the upload of master data and transaction data while taking care of the dependencies between processes.
2. What is the transaction for creating process chains?
RSPC.
3. Explain collector processes?
Collector processes are used to manage multiple predecessor processes that feed into the same subsequent process. The collector processes available for BW are:
AND: All of the direct predecessor processes must raise an event in order for the subsequent process to be executed.
OR: At least one predecessor process must send an event; the first predecessor process that sends an event triggers the subsequent process. Any additional predecessor process that sends an event will trigger the subsequent process again (only if the chain is scheduled as periodic).
EXOR (Exclusive OR): Similar to the regular OR, but there is only ONE execution of the successor processes, even if several predecessor processes raise an event.
4. What are Application Processes?
Application processes represent BW activities that are typically performed as part of BW operations.
Examples include:
Data load
Attribute/Hierarchy Change run
Aggregate rollup
Reporting Agent Settings
5. Tell some facts about process chains
o Process chains are transportable; there is a button for writing to a change request when maintaining a process chain in RSPC.
o Process chains are available in the transport connection wizard (Administrator Workbench).
o If a process dumps, it is treated in the same manner as a failed process.
o The graphical display of process chain maintenance requires the 620 SAP GUI and the SAP BW 3.0B frontend GUI.
o A special control background job runs to facilitate the execution of the other batch jobs of the process chain.
o Note your BTC process distribution, and make sure that an extra BTC process is available so the supporting control job can run immediately.
6. What happens when a chain is activated?
When a chain is activated, it is copied into the active version. The processes are scheduled in batch as program RSPROCESS, with type and variant passed as parameters, under the job name BI_PROCESS_<TYPE>, waiting for an event (except for the trigger). The trigger is scheduled as specified in its variant; if the chain is started via a meta chain, the trigger is not scheduled in batch.
7. Steps in process chains?
Go to transaction code RSPC.
Follow the basic flow of a process chain:
1. Start chain
2. Delete BasicCube indexes
3. Load data from the source system into the PSA
4. Load data from the PSA into the ODS object
5. Activate data in the ODS object
6. Load data from the ODS object in the BasicCube
7. Create indexes after loading for the BasicCube
Also check out these links:
/people/siegfried.szameitat/blog/2005/07/28/data-load-errors--basic-checks
/people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
http://help.sap.com/saphelp_nw2004s/helpdata/en/8f/c08b3baaa59649e10000000a11402f/frameset.htm
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/8da0cd90-0201-0010-2d9a-abab69f10045
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/19683495-0501-0010-4381-b31db6ece1e9
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/36693695-0501-0010-698a-a015c6aac9e1
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/9936e790-0201-0010-f185-89d0377639db
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3507aa90-0201-0010-6891-d7df8c4722f7
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/263de690-0201-0010-bc9f-b65b3e7ba11c
Assign points if useful
Regards,
Hari Reddy -
Buyer-changed promised date should not reflect in supplier portal
Hi,
I am creating a PO with an NBD (need-by date) of D30 and a promised date (NBD minus transit time) of D25, and approving it.
This PO then goes to the supplier through iSupplier.
The supplier changes the date to D40 as the new promised date.
This goes back to the buyer, and the buyer changes the promised date and re-approves it.
Now, since the FOB is on the buyer's side, he can decide the transit time only about one week before shipment; so the transit time may change to 5, 10 or 20 days, or remain 5 days.
Based on that, the buyer will change the promised date one week before shipment and re-approve the PO.
This change should not reflect in iSupplier, because the new promised date may sometimes be later (D45) than what the supplier promised (D40), and this may affect their performance report (from the supplier's point of view).
Please suggest: is there a workaround?
Thanks,
Jey -
Please help me in this scenario.
Hello, please help me in the below scenario.
We are loading data from the source system into cube A as a delta load.
Then from cube A into ODS B, again as a delta load; all key figures are set as additive.
Then from ODS B into cube C, once again as delta.
Somehow we found that there are double records (i.e. double the net value) for weeks 01 to 07.
The root cause was an initial load from cube A to ODS B and from ODS B to cube C (2 weeks before).
After that there were delta requests for weeks 8, 9 and 10.
Now we have done a selective deletion from cube C for weeks 1 to 7.
But how can we repair the whole scenario, because in the next load I think the data will again be updated into cube C, which is not correct.
All the requests are in between, and the delta has also been done, so how can I correct things?
Note that cube C and ODS B are getting data from many sources.
Thanks in advance.
Regards,
Manoj
Even if you did a selective deletion from cube C, if the data is still additive (doubled) in the ODS, then the records going to the cube will not be correct. Can you check whether the ODS is also duplicated? I am sure it is. I would recommend you do a re-init on cube C after making sure the data in the ODS is correct; if it is not, then you need to do the same selective deletion on the ODS as well and bring the data from cube A to the ODS and to cube C.
thanks.
Wond -
Hi Experts,
I am a novice candidate pursuing SAP BW/BI opportunities and appearing for different SAP BW/BI interviews. Following are some of the questions that interviewers fired at me. I appreciate your time and consideration in answering my queries.
1) How do you initialize the setup tables to fill them with the data of just the past three years?
And after a full load from the setup tables and executing the delta, if there is later a requirement to add more application tables to the DataSource in the LO Cockpit, how should we go about it without getting duplicate records from the source system, while retaining the original delta?
2) What are the common transformation issues, and what are the methods to solve them?
3) How do you handle DTP issues, e.g. if there were 1000 records in the DSO object and the InfoCube received just 900, bad character problems, etc.?
4) What is the RDA background process flow?
5) How do you implement plan/actual comparisons, say for Profitability Analysis?
What data does the plan InfoCube contain with respect to the actual InfoCube? Can you explain with an example?
6) In Business Content activation, how do you exclude the objects that have already been activated?
7) Does a change run affect all aggregates, or only the aggregates containing the master data that is undergoing the change run?
8) Can somebody send me sample functional requirements design documents/blueprints/detail design docs?
9) What are functional support issues in SAP BI implementations?
10) What is the fastest and best method to improve query performance?
If I am right, is it cache settings?
11) I am also preparing for the BI 7.0 certification exam, which is on March 7th; I need some sample questions.
12) I need more information to prepare for SAP BI functional analyst and developer interviews.
Thanks
Mujtaba.
Lot of points will be given for urgent replies.
Edited by: Nazeeruddin Mujtaba Mohammed on Feb 16, 2008 7:20 PM
Hi Jacky,
6) In business content activation,how to exclude the objects that have been already activated.
Just select the particular object, open the context menu, and choose "Do not install below".
12) Need more information to prepare for SAP BI functional analyst and developer interviews.
Normally the production support activities include
Scheduling
R/3 Job Monitoring
B/W Job Monitoring
Taking corrective action for failed data loads.
Working on some tickets with small changes in reports or in AWB objects.
The activities in a typical Production Support would be as follows:
1. Data Loading - could be using process chains or manual loads.
2. Resolving urgent user issues - helpline activities
3. Modifying BW reports as per the need of the user.
4. Creating aggregates in Prod system
5. Regression testing when version/patch upgrade is done.
6. Creating ad-hoc hierarchies.
We can perform the following daily activities in production:
1. Monitoring data load failures through RSMO
2. Monitoring process chains daily/weekly/monthly
3. Performing the hierarchy/attribute change run
4. Checking the aggregate rollup
To add to the above:
1) Check that the data targets are ready for reporting.
2) No failed or cancelled jobs in SM37 and the BW monitor.
3) All requests are loaded for the day (likewise the monthly and yearly loads).
4) Also note down the time taken for loading the critical InfoCubes that are used for reporting.
5) Check whether there is any break in the schedules of your process chains.
Why are there frequent load failures during extractions, and how do we analyse them?
If the failures are related to data, there might be data inconsistencies in the source system, even though we handle them properly in the transfer rules. We can monitor these issues in transaction RSMO and in the PSA (failed records), and update from there.
If we are talking about the whole extraction process, there might be issues with work process scheduling and with the IDoc transfer from the source system to the target system. These can be re-initiated by cancelling that specific data load (usually by changing the request colour from yellow to red in RSMO) and restarting the extraction.
What are the daily tasks we do in production support? How many times do we extract the data, and at what times?
It depends. Data load timings are in the range of 30 minutes to 8 hours. The time depends on the number of records and the kind of transfer rules you have; if the transfer rules contain roundabout logic and the update rules have calculations for customized key figures, long runtimes are to be expected.
Usually you need to work in RSMO, see which records are failing, and update them from the PSA.
What are some of the frequent failures and errors?
As for frequent failures and errors: there is no single fixed reason for a load to fail. From an interview perspective I would answer it this way:
a) Loads can fail due to invalid characters
b) Because of a deadlock in the system
c) Because of a previous load failure, if the load is dependent on other loads
d) Because of erroneous records
e) Because of RFC connections
These are some of the reasons for load failures.
For RFC connections:
We use SM59 for creating RFC destinations.
Some questions:
1) RFC connection lost.
A) We can check it in the SM59 transaction: RFC Destinations -> R/3 connections -> CRD client (our R/3 client) -> double-click -> Test Connection in the menu.
2) Invalid characters while loading.
A) Change them in the PSA and load again (see the routine sketch after this list).
3) ALEREMOTE user is locked.
A) Ask your Basis team to release the user; it is mostly ALEREMOTE. Typical causes:
o The password was changed
o Too many incorrect attempts to log on as ALEREMOTE
Also use transaction SM12 to find out whether there are any locks.
4) Lower case letters not allowed.
A) Tick the "Lowercase letters" checkbox under the "General" tab of the InfoObject.
5) While loading the data I am getting a message that 'Record
A) The field mentioned in the error message is not mapped to any InfoObject in the transfer rules.
6) Object locked.
A) It might be locked by some other process or a user. Also check the authorizations.
7) "Non-updated IDocs found in source system".
8) While loading master data, one of the data packages has a red-light error message: master data/text of characteristic ZCUSTSAL already deleted.
9) Extraction job aborted in R/3.
A) It might have been cancelled because it ran longer than expected, or it may have been cancelled by R/3 users if it was hampering performance.
10) Request could not be activated because there is another request in the PSA with a smaller SID.
A)
11) Repeat of last delta not possible.
12) DataSource not replicated.
A) Replicate the DataSource from R/3 through the source system in the AWB, assign it to the InfoSource and activate it again.
13) DataSource/transfer structure not active.
A) Use the function module RS_TRANSTRU_ACTIVATE_ALL to activate it (see the call sketch after this list).
14) ODS activation error.
A) ODS activation errors can occur mainly due to the following reasons:
1. Invalid characters (#-like characters)
2. Invalid data values for units/currencies etc.
3. Invalid values for the data types of characteristics and key figures
4. Errors in generating SID values for some data
15) Conversion routine error.
A) Check the data format in the source.
16) Object cannot be activated / error when activating an object.
A) Check the consistency of the object.
17) No data found (in a query).
A) Check whether the InfoProvider contains data and delete any unsuccessful requests.
18) Error generating or activating update rules.
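For question 2 above (invalid characters), one common approach is a small transfer-rule routine that blanks out characters BW does not accept. The following is a sketch only: the source field name is invented, and the permitted character set must match what is maintained in RSKC in your system:

* Sketch: replace characters outside the permitted set with spaces.
* TRAN_STRUCTURE-/bic/zztext is a placeholder source field.
  CONSTANTS:
    c_allowed(72) TYPE c VALUE
      ' !"%&''()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'.
  DATA: l_len TYPE i,
        l_off TYPE i.
  RESULT = TRAN_STRUCTURE-/bic/zztext.
  TRANSLATE RESULT TO UPPER CASE.
  l_len = strlen( RESULT ).
  DO l_len TIMES.
    l_off = sy-index - 1.
    IF RESULT+l_off(1) NA c_allowed.
      RESULT+l_off(1) = ' '.
    ENDIF.
  ENDDO.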
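And for question 13 above, a sketch of calling the activation function module from a small test report. The InfoSource and source-system names are placeholders, and the parameter names should be verified in SE37 before use:

REPORT z_activate_transtru.
* Sketch only: reactivate the transfer structures of an InfoSource.
CALL FUNCTION 'RS_TRANSTRU_ACTIVATE_ALL'
  EXPORTING
    i_infosource = 'ZMY_INFOSOURCE'   " placeholder InfoSource
    i_logsys     = 'DEVCLNT100'.      " placeholder source system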
Regards,
Hari -
Changes should not reflect in report
Hello friends,
I ran a report in BEx which gives two values per record. For example: I have ctrl1 with a value of 50, and after some period of time the value changed to 40. I am getting two records in the report, one with the value 50 and the other with 40. I need only the last changed value in the report, not the previous one. I used an ODS as the data store and reporting is done from the cube. Any suggestions?
Thanks in advance.
A.M
Hi,
As the InfoCube is additive and the ODS is overwrite: if you want a single value that is the sum of both the first and the second value, then compress the data in the cube. It is not possible to get only the latest value from the cube; in the ODS it is possible to have the latest value.
Best regards,
Malli. -
Options on the Transtru/DataSource tab page in InfoSource maintenance!!
Hello BW Experts,
Are the options on the Transtru/DataSource tab page (in the InfoSource maintenance screen), i.e. Full Upload, New Status (ODS only) and Additive Delta (ODS and InfoCube), ALSO AVAILABLE when we do Business Content extraction, e.g. LO extraction?
Or are these options only applicable to flat file loading??
Please clarify,
Regards,
Sapster.
Hi Sapster,
Those options are available for flat file loading only.
As for the extractions, you can proceed as below:
T-Code: LBWE
First we need to check in LBWE which DataSource suits the client's requirements. Check whether it is in the Active version or the Modified version. If it is in the M version, go to RSA5, select the DataSource and press Transfer; then go to RSA6 and verify that the DataSource has been transferred.
If the DataSource is already in the active version, we need to check whether it is already extracting data into BW. If it is, we need to check for existing data in the setup tables (use SE11 to check the setup table data; for every extract structure one and only one setup table is generated, whose technical name is the extract structure name + SETUP, e.g. if the extract structure name is MC11VAOHDR then the setup table name is MC11VAOHDRSETUP), in the extraction queue (LBWQ), in the update tables (SM13) and in the delta queue (RSA7). If data exists in any of these transactions, we need to decide whether we need it in BW or not: if we need it, extract it as in the LO extraction flow below; if we don't need it, delete the data.
The data flow from R/3 into BW:
We need to generate the extract structure by selecting the fields from the communication structure in LBWE.
Generate the DataSource and select the selection fields, cancellation fields and hide fields that we want.
Replicate it into BW. Then we need to attach an InfoSource (transfer rules/communication structure) to the DataSource. We have 3 methods to attach the InfoSource:
1) Business Content: Business Content automatically proposes the transfer rules and communication structure; we don't have to do anything manually.
2) Application proposal: here too a proposal is made, but some objects will be missing, and we need to assign them in the transfer rules.
3) Others: here we need to create the transfer structure, transfer rules and communication structure from scratch.
Then do the modelling (InfoCube, InfoSource attachment, ...) and activate the extract structure.
We need to fill the setup tables for the first-time loading. When filling the setup tables we can choose between full load and delta initialization loads.
Filling the setup table:
T-Code: SBIW -> Settings for Application-Specific DataSources (PI) -> Logistics -> Managing Extract Structures -> Initialization -> Filling in the Setup Table -> Application-Specific Setup of Statistical Data; there you can perform the setup (example: SD Sales Orders - Perform Setup) and execute it. Alternatively, use the direct T-code OLI*BW (based on your application, like sales order/billing/purchasing etc.), where * equals the application number, e.g. 02 for purchasing, 08 for shipment.
First we need to decide whether we want delta loads to be performed in the future. If we want delta loads, we go for the delta initialization process; otherwise we do a full load.
When we perform the setup-table extraction, since setup tables are cluster tables, we can't see the data in the setup tables directly, so we use the extractor checker (RSA3) to look at the setup table data (full/delta initialization).
Then create an InfoPackage, select Full or Delta Initialization on the Update tab, and schedule it.
Delete the setup table data using LBWG.
Now we need to do the delta loads. The delta-load data flow differs with the delta update method; as you know, there are 3 delta update methods:
If we select the "Queued Delta" update method, the data moves to the extraction queue (LBWQ). Then run the collective update to move the data from LBWQ into the delta queue (RSA7). Then schedule the load using the InfoPackage, selecting Delta Load on the Update tab.
If we select "Direct Delta", the delta data moves into RSA7 directly.
If we select "Unserialized V3", the data goes into the update tables (SM13); then run the collective update to move the data from SM13 into RSA7, and schedule the load using the InfoPackage.
In LBWE: if we click on Maintenance, we can generate the extract structure. If we click on the 2LIS_02_CGR button, we can generate the DataSource. Via Active/Inactive under the Update column we can activate or deactivate an extract structure. If we click on the Job Control button, we can maintain the collective update parameters, such as the start time and whether it runs hourly or daily. If we click on the "Queued delta" button under Update Mode, we can choose among the 3 delta update methods.
Only full/delta-initialization loads move data into the setup tables; delta-load data does not move into the setup tables.
**RSA3 contains only the setup tables' data.
Only delta update data moves into RSA7/LBWQ/SM13, not full/delta-initialization load data.
Generic extraction:
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/84bf4d68-0601-0010-13b5-b062adbb3e33
Hope it helps,
Assign points if useful.
Regards,
Archna -
Hi,
The scenario we have is that we have backend design done in two ways.
Let me explain both the structures completely.
1. We have 2 layers. In the 1st layer we have 3 ODS objects with full update, and on top of them 3 InfoCubes with delta update from the underlying ODS objects. The data from the cubes in the lower layer then goes to the 3 cubes in the upper layer, and a MultiProvider is built on these upper-layer cubes, on which reporting is done.
2. Here also we have 2 layers. In the lower layer we have 3 ODS objects with full update, then another layer with 3 ODS objects with delta update, and these 3 ODS objects feed the 3 cubes in the upper layer. The rest remains the same.
In short, we have cubes with delta update in the first structure and ODS objects with delta update in the second structure.
Can someone please explain which of these is better, and why?
Please reply.
Regards,
Suchitra
Hi,
As per your scenario, we are using the cube as the reporting layer (via the MultiProvider) in the first case and the ODS as the reporting layer in the second case. The differences between the two cases can be categorized in 2 ways:
1. Architecture-wise
2. Reporting-wise
Architecture-wise, we have the following differences:
One major difference is the manner of data storage. In an ODS, data is stored in flat tables; by flat we mean ordinary transparent tables. A cube, on the other hand, is composed of multiple tables arranged in a star schema and joined by SIDs. Its purpose is multi-dimensional reporting.
Another difference: in an ODS you can update an existing record given the key. In cubes there is no such thing; a cube accepts duplicate records and, during reporting, sums the key figures up. There is no editing of previous record contents, just adding. With an ODS the procedure is: update if existing (based on the table key), otherwise add the record.
Reporting-wise:
Basically you use ODS objects to store the data at document/item/schedule-line level, whereas in the cube you will have only more aggregated data (by material, customer, ...). So you can do your reporting on the already aggregated data and, if necessary, do detailed reporting on the ODS object. Additionally, ODS objects will provide you a delta in case your DataSource doesn't deliver one: just use overwrite mode for all fields (characteristics and key figures) in the update rules and the ODS will take care of the rest.
InfoCubes are multi-dimensional objects with fact and dimension tables, whereas an ODS is not a multi-dimensional object: there are no fact and dimension tables; it consists of flat transparent tables.
In InfoCubes there are characteristics and key figures; in an ODS there are key fields and data fields, and we can keep non-key characteristics in the data fields.
Sometimes we need detailed reports, which we can get through an ODS. ODS objects are used to store data in granular form, i.e. the level of detail is higher; the data in the InfoCube is in aggregated form.
From a reporting point of view, the ODS is used for operational reporting, whereas InfoCubes are used for multi-dimensional reporting.
ODS objects can be used to merge data from one or more InfoSources, but InfoCubes do not have that facility.
The default update type for an ODS object is overwrite; for an InfoCube it is addition. ODS objects are used to implement deltas in BW: data is loaded into the ODS object as new records, existing records are updated via the change log, or existing records are overwritten in the active data table, controlled by 0RECORDMODE. An invented example of this mechanism is given below.
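A short illustrative example (values invented): an order item is first loaded with quantity 10, so the change log receives a new image (0RECORDMODE 'N', +10). When the quantity later changes to 15 and the request is activated, the change log receives a before-image ('X', -10) and an after-image (' ', +15). A delta-fed InfoCube adds all three records (10 - 10 + 15) and therefore also shows 15, in line with the ODS active table.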
You cannot load data using the IDoc transfer method into an ODS, but you can into an InfoCube.
You cannot create aggregates on an ODS; you cannot create InfoSets on an InfoCube.
ODS objects can be used:
when you want the overwrite facility, i.e. when you want to overwrite non-key characteristics and key figures;
if you want detailed reports;
if you want to merge data from two or more InfoSources.
An ODS also allows you to drill down from an InfoCube through the RRI interface.
Moreover, to conclude: performance-wise for reporting, cubes are better compared to ODS objects.
As per your requirement you can build your model, weighing the advantages and disadvantages mentioned above.
Note: all this information is available in various threads; if you had searched thoroughly you would have found it.
With Regards,
Prafulla Singh.
Edited by: prafulla singh on Mar 27, 2008 12:47 PM -
Addition and Overwrite update types for ODS
Hello BW Experts ,
I have an issue.
For a particular order and cost element on the R/3 side I have four cost figures.
When I do a full upload from the related standard DataSource into the ODS with update type Overwrite, I get only the last cost figure for that order and cost element.
I am loading the data from this ODS further into an InfoCube.
Now the problem is that I want the sum of all the cost figures for the same order and the same cost element.
But as the last cost figure overwrites the others in the ODS, I am not getting the correct sum.
So I have changed the update type from Overwrite to Addition.
Now I am getting the sum of all cost figures correctly.
But now I am doubtful: if I load further data into the ODS by full upload, will all the cost figures get doubled?
Please explain what to do in such a case.
Thanks in Advance,
Amol.
Hello Amol,
check whether the DataSource supports delta!! You can see it in the RSA6 DataSource display; check the delta checkbox. It can also be found from the ROOSOURCE table.
If yes, shift from full loads to init + delta loads.
Only the init may take some time, but the daily deltas should not take much time.
One more thing to add to the earlier responses: you can automate the deletion of similar requests in the InfoPackage settings, so that you need not delete the full-upload request manually every day (if you are working with daily full uploads).
Hope it helps.
regards, -
Hello,
I needed to create an additional index on the ODS, and I did it via SE11 --> index. The index got created, but it cannot be transported since it has development class $TMP, and we could not find a way to change the development class. So I deleted that index and tried to create it via RSA1 --> InfoProvider --> Edit ODS. After creating the index there, I tried to activate the ODS, but it can no longer be activated; the message is "Error adjusting the database".
What can I do to get that ODS activated? Thanks
Polina
Every developer should have SE14 access in the development system. If your company's security personnel are reluctant to grant that access, just walk over to a friendly Basis person and have him or her run the transaction.
Remember to request the SE14 authorization formally so that you will have it when you need it next time! -
Problems with delta loads into an ODS in addition mode
Hello friends,
I have run into a problem; I will try to explain it clearly.
My system is BI 7.0. I created this data flow:
1. Extractor: 0FI_GL_4
2. ODS ZFI_GL_4: update rule (not transformation), overwrite & delta mode
3. ODS ZODSHCC1: update rule (not transformation), addition & delta mode
4. Cube ... (from this step on my problem already exists)
The problem is with the feeding of the second ODS (the first one is fed correctly).
Because of a "stupid" generated ABAP line, the update rule does not take the local currency (present on the data-field screen, not the key-field screen) of before-image records into account:
IF NOT g_s_is-recordmode = rsudt_c_updmode-before_image. " 'X'
  PERFORM r0004_0LOC_CURRCY
    CHANGING l_wa_new l_val_set c_t_idocstate c_subrc l_abort.
  IF l_abort <> 0.
    EXIT.
  ELSEIF c_subrc <> 0.
*   skip this record and continue
    c_subrc = 0.
    REFRESH g_t_kb.
    CLEAR g_s_kb.
    CONTINUE.
  ENDIF.
  g_flg_rec = rs_c_true.
ENDIF.
Consequently, after activation the lines affected by the before-image records have the local currency field blanked out, although the key figure is correct.
On the other hand, the after-image records (always new lines in the DSO, i.e. new keys) are fine (correct amount, correct currency, ...).
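For reference, the before/after images mentioned here are distinguished by 0RECORDMODE. A sketch of the common values as they are documented for the BW delta process (verify the exact set in your release):

CASE g_s_is-recordmode.
  WHEN space. " after-image: the new state of the changed record
  WHEN 'X'.   " before-image: the old state, key figures negated
  WHEN 'N'.   " new image: the record did not exist before
  WHEN 'A'.   " additive image: only the difference of the key figures
  WHEN 'D'.   " delete: the record was deleted in the source
  WHEN 'R'.   " reverse image: cancels a record sent earlier
ENDCASE.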
Thanks for your help.
Samuel
PS: Don't hesitate to give me an email address; I could send you screenshots to make the problem easier to understand.
Hello Patchov,
Yes, I'm sure: it is generated update-rule ABAP code.
Regarding your comment: "But if I guess, the code you call generated is part of delivered code and it is meant for a UR with overwrite, not for addition."
The field in question, local currency (0LOC_CURRCY), is set to overwrite; it is not a key figure (and not a key field), so I cannot choose the addition option.
Samuel -
Adding new fields to a BW 3.5 ODS and Cube and transporting them to PRD.
Hi,
We have a scenario on 3.5 where an ODS and a Cube are enhanced (a few new fields are added); the ODS also feeds data to the Cube. Since we had no data in the quality system, adding the fields to the ODS and Cube was no problem, but now we need to transport these changes to production, where the ODS and Cube hold a lot of data. We have a few doubts:
1. Do we need to delete the data from the ODS and Cube before transporting the request to the production server?
2. Is it OK to move the transport request without deleting the data in the ODS and the subsequent Cube in the production system?
Guys and gals,
what is your suggestion on this one? We are on BW 3.5 only, no BI 7.
Please revert back.
Hi,
you can transport that directly to production.
The new version will overwrite the existing one, and for each newly added object a new column will be created.
It will not affect the old data.
For the Cube, even if data is there, there is a concept called remodeling:
http://help.sap.com/saphelp_nw70/helpdata/en/58/85e5414f070640e10000000a1550b0/content.htm
hope this helps
santosh -
Adding fields from the DataSource through to the ODS/Cube
Hi All
Assume I have a DataSource (a CO-PA DataSource) replicated and connected through to a Cube/ODS that already contains data, and now I need to add extra fields to this pipeline. Please guide me on how to add these extra fields, starting from the DataSource, then the InfoSources, InfoObjects, Transformations (BI 7.0) or Transfer Rules/Update Rules (BW 3.5) if any, and finally the data targets (ODS & Cube).
Thanks.
Please search SDN; you will find a lot of threads where the same topic has already been discussed.
Khaja -
Hello
Can anybody tell me how I can add a field to an ODS?
Regards
Tarun
Also consider that the position at which you add the new InfoObject matters for the runtime of the transport into another system where the ODS object already contains (huge amounts of) data.
If you change the ODS structure in a development system and transport it to a production system, the underlying database performs a simple extend if you add the new InfoObjects at the end. If the new InfoObjects are inserted somewhere in the middle, the database performs an export (to file), deletion of the table, insertion of the fields, and an import from the file again. This has a significant impact on the runtime (by factors).
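If you want to verify where a new field ended up before transporting, you can read the field order of the ODS active table from the dictionary. A sketch (the table name /BIC/AZMYODS00 is hypothetical; the active table of an ODS is /BIC/A<odsname>00):

TYPES: BEGIN OF ty_fld,
         fieldname TYPE dd03l-fieldname,
         position  TYPE dd03l-position,
       END OF ty_fld.
DATA: lt_fields TYPE STANDARD TABLE OF ty_fld.

SELECT fieldname position FROM dd03l
  INTO TABLE lt_fields
  WHERE tabname  = '/BIC/AZMYODS00'  " hypothetical active table name
    AND as4local = 'A'               " active version
  ORDER BY position.

If the new field has the highest POSITION, the database can extend the table in place.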
Best regards,
Steen