Interview questions on implementation
Hi SAP experts, this is Sravan. I have an interview scheduled with a company,
and they told me that if I am selected I will have to work on the implementation side.
The problem is that I don't have much knowledge of implementation.
Can anyone tell me what kind of questions they will ask, and give a good explanation?
If possible, please send me the ASAP methodology documentation as soon as you can.
It is very urgent.
You shall be rewarded with points.
Sravan
http://www.sap-img.com/sap-sd.htm
Interview Questions
Important Tips for Interview for SAP SD
SAP SD Interview Questions
Interview Questions and Answers on SAP SD
Some SAP SD Interview Questions 1
Some SAP SD Interview Questions 2
Tables/Tcodes in SAP SD
Important Tables for SAP SD
SAP SD Transaction Codes List
Task Specific SD Transaction Codes 1
Task Specific SD Transaction Codes 2
SD Frequently Asked Questions
Sales and Distribution FAQ
Link Between SAP SD, MM & FI
Why Do We Assign Division to Sales Organisation
Regards,
Rajesh Banka
Reward points if helpful.
Similar Messages
-
Interview questions in an implementation project?
Hello masters,
I have an interview with "SAP INDIA".
Kindly tell me what kind of questions they ask about an IMPLEMENTATION PROJECT.
Here are the responsibilities mentioned in my CV:
--> Extensive usage of the Administrator Workbench (RSA1).
--> Data modeling using BW extended star schema concepts.
--> Maintenance of DataSources at the SAP OLTP source.
--> Data upload of both master data and transactional data from the source system into SAP BW.
--> Worked extensively on the LO Cockpit; loaded data from R/3 to BW using initialization with delta update and full update methods.
--> Installation of relevant SAP Business Content as and when required.
--> Data extraction into InfoCubes after creating transfer rules, InfoPackages and update rules.
--> Installed Business Content objects like InfoObjects, InfoSources, InfoCubes, ODS objects, InfoSets and InfoProviders using the Administrator Workbench (RSA1).
--> Solving performance-related issues by applying partitioning, aggregates and compression.
--> Creation of reports in the reporting environment using BEx Analyzer and Query Designer, and organizing workbooks in BEx Browser.
Kindly let me know the answers also.
Thank you very much,
Vijay
Hi Vijay,
Call me once; I will let you know.
+91-9819983246
Thanks -
I have an interview with IBM; can anybody tell me the questions IBM asks?
Hi all,
I am an SAP FICO consultant and I have an interview with IBM. Can anybody tell me the questions IBM asks, or any way of getting hold of interview questions, please?
Thank you,
Chinna
Hi,
Dude, I don't think anyone will have the exact questions asked by IBM. Try the links below, which might help you in your interview. You can also search this forum for more FAQs and pick up some real-time issues.
http://www.sap-img.com/financial/fi-faq.htm
http://www.erpgenie.com/sapfunc/fi.htm
http://www.sapprofessionals.org/?q=125_fi_questions_for_certification
https://www.sdn.sap.com/irj/sdn/wiki?path=/display/home/sap+solutions&
All the best
Regards
Genie -
Real-time (or similar) projects
Hi everyone,
Where can I find some real-time (or similar) projects/exercises so that I can practice by myself on various topics like ETL, modeling and reporting?
My email ID: [email protected]
Thanks
Hi Ranjit,
Please forward me some project details.
Last week an interviewer asked about implementation issues: what type of problems did you face at the time of implementation?
Please send me the details; it would be very helpful to me.
Thanks,
Shiva -
Can you help me on these 6 SAP BW interview questions?
Hi,
Can you help me get a better understanding of the expectations behind the following interview questions?
Please explain them in your own words. If you think additional information helps, you may provide a link, but I am more interested in your own words and the typical things to say:
1. What is your performance tuning experience (discussion is on SAP BW)?
2. What is your experience with SAP BW change control?
3. You will have to ensure data integrity, adherence to standards and process excellence. What is your experience?
4. What exactly is entailed in a full-lifecycle SAP BW project? When can one say that she has experience of three full-lifecycle SAP BW projects?
5. You will be responsible for working with the business to develop business intelligence metrics and analytics. What is expected to be discussed regarding business intelligence metrics and analytics?
6. You will be responsible for designing road-map for data warehouse implementation and growth. What is your experience?
Thanks
Amanda,
All the questions posed have a rider mentioning 'your experience' - I am not very sure we can tell you what your experience has been...
My 0.02
Arun -
Hi friends,
I recently faced an interview.
Please send answers to these questions:
How many data fields and key fields can we create in a DSO?
Can you overwrite key fields or data fields?
Which update mode do we use in delta queue extraction (V1, V2 or V3)?
Which message do we get when a transported request fails?
What is the structural difference between an InfoCube and a DSO?
Data loading takes a huge amount of time when we extract data from the source system to the BI system; how do we solve this? (Before, it took 3-4 hours; now data loading takes 4 days.)
What is the difference between a display attribute and a navigational attribute? How do you make an attribute a display attribute or a navigational attribute?
How do you load flat file data?
How do you load hierarchy file data?
What is HACR (hierarchy/attribute change run)?
How do you maintain HACR?
If there is an issue in HACR, how do you resolve it?
What is Baby Cube?
Why do we create aggregates?
What is the use of aggregates?
Is there any particular field on which we can create aggregates, or can we maintain an aggregate on any field?
What are the different DSOs available, and what is the difference between them?
What is a replacement path?
What are the extractor types?
• Application Specific
o BW Content FI, HR, CO, SAP CRM, LO Cockpit
o Customer-Generated Extractors
LIS, FI-SL, CO-PA
• Cross Application (Generic Extractors)
o DB View, InfoSet, Function Module
2. What are the steps involved in LO Extraction?
• The steps are:
o RSA5: Select the DataSources
o LBWE: Maintain DataSources and activate extract structures
o LBWG: Delete setup tables
o OLI*BW: Fill setup tables
o RSA3: Check extraction and the data in the setup tables
o LBWQ: Check the extraction queue
o LBWF: Log for LO extract structures
o RSA7: BW delta queue monitor
3. How to create a connection with LIS InfoStructures?
• LBW0 Connecting LIS InfoStructures to BW
4. What is the difference between ODS and InfoCube and MultiProvider?
• ODS: Provides granular data, allows overwrite and data is in transparent
tables, ideal for drilldown and RRI.
• CUBE: Follows the star schema, we can only append data, ideal for primary
reporting.
• MultiProvider: Does not hold physical data. It allows access to data from
different InfoProviders (cube, ODS, InfoObject). It is also preferred for
reporting.
5. What are Start routines, Transfer routines and Update routines?
• Start Routines: The start routine is run for each DataPackage after the data
has been written to the PSA and before the transfer rules have been executed.
It allows complex computations for a key figure or a characteristic. It has no
return value. Its purpose is to execute preliminary calculations and to store
them in global DataStructures. This structure or table can be accessed in the
other routines. The entire DataPackage in the transfer structure format is used
as a parameter for the routine.
• Transfer / Update Routines: They are defined at the InfoObject level. It is
like the Start Routine. It is independent of the DataSource. We can use this to
define Global Data and Global Checks.
6. What is the difference between start routine and update routine, when, how
and why are they called?
• Start routine can be used to access InfoPackage while update routines are
used while updating the Data Targets.
7. What is the table that is used in start routines?
• Always the table structure will be the structure of an ODS or InfoCube. For
example if it is an ODS then active table structure will be the table.
8. Explain how you used Start routines in your project?
• Start routines are used for mass processing of records. In start routine all
the records of DataPackage is available for processing. So we can process all
these records together in start routine. In one of scenario, we wanted to apply
size % to the forecast data. For example if material M1 is forecasted to say
100 in May. Then after applying size %(Small 20%, Medium 40%, Large 20%, Extra
Large 20%), we wanted to have 4 records against one single record that is
coming in the info package. This is achieved in start routine.
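To make that concrete, here is a minimal sketch of what such a start routine body could look like in a BW 3.x update rule. It is an illustration under assumptions, not the actual project code: the fields SIZE_CD and QUANTITY and the hard-coded percentages are invented, since the real line type of DATA_PACKAGE depends on the communication structure of the InfoSource.
* Hypothetical start routine body: fan each forecast record out
* into four size records (S 20%, M 40%, L 20%, XL 20%).
DATA: ls_new LIKE DATA_PACKAGE,          " work area, typed like one record
      lt_new LIKE TABLE OF DATA_PACKAGE. " collects the split records
LOOP AT DATA_PACKAGE.
  ls_new = DATA_PACKAGE.
  ls_new-size_cd  = 'S'.
  ls_new-quantity = DATA_PACKAGE-quantity * 20 / 100.
  APPEND ls_new TO lt_new.
  ls_new-size_cd  = 'M'.
  ls_new-quantity = DATA_PACKAGE-quantity * 40 / 100.
  APPEND ls_new TO lt_new.
  ls_new-size_cd  = 'L'.
  ls_new-quantity = DATA_PACKAGE-quantity * 20 / 100.
  APPEND ls_new TO lt_new.
  ls_new-size_cd  = 'XL'.
  ls_new-quantity = DATA_PACKAGE-quantity * 20 / 100.
  APPEND ls_new TO lt_new.
ENDLOOP.
* Replace the incoming package with the expanded set of records.
REFRESH DATA_PACKAGE.
DATA_PACKAGE[] = lt_new.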
9. What are Return Tables?
• When we want to return multiple records instead of a single value, we use the
return table in the update routine. Example: if we have the total telephone
expense for a cost center, using a return table we can get the
expense per employee.
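A hedged sketch of that example follows, for a key figure routine with the 'Return table' option checked. The fields EMPLOYEE and EXPENSE and the fixed head count are invented for illustration; a real routine would read the employees from master data.
* Key-figure routine with "Return table": instead of one RESULT value,
* fill RESULT_TABLE with one record per employee.
DATA: ls_result LIKE LINE OF RESULT_TABLE,
      lv_emps   TYPE i VALUE 4.   " assumed employees per cost center
REFRESH RESULT_TABLE.
DO lv_emps TIMES.
  CLEAR ls_result.
  ls_result-employee = sy-index.                          " hypothetical field
  ls_result-expense  = COMM_STRUCTURE-expense / lv_emps.  " hypothetical field
  APPEND ls_result TO RESULT_TABLE.
ENDDO.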
10. How do the start routine and return table synchronize with each other?
• The return table is used to return values after the execution of the start routine.
11. What is the difference between V1, V2 and V3 updates?
• V1 Update: It is a Synchronous update. Here the Statistics update is carried
out at the same time as the document update (in the application
tables).
• V2 Update: It is an Asynchronous update. Statistics update and the Document
update take place as different tasks.
o V1 & V2 don't need scheduling.
• Serialized V3 Update: The V3 collective update must be scheduled as a job
(via LBWE). Here, document data is collected in the order it was created and
transferred into the BW as a batch job. The transfer sequence may not be the
same as the order in which the data was created in all scenarios. V3 update
only processes the update data that is successfully processed with the V2
update.
12. What is compression?
• It is a process that deletes the request IDs by collapsing all requests into a
single one (request ID 0), which saves space.
13. What is Rollup?
• This is used to load new DataPackages (requests) into the InfoCube
aggregates. If we have not performed a rollup then the new InfoCube data will
not be available while reporting on the aggregate.
14. What is table partitioning and what are the benefits of partitioning in an
InfoCube?
• It is a method of dividing a table that enables quick access. SAP uses fact
table partitioning to improve performance. We can partition only on 0CALMONTH
or 0FISCPER. Table partitioning helps run reports faster, as data is read only
from the relevant partitions, and table maintenance becomes easier. Oracle,
Informix and IBM DB2/390 support table partitioning, while SAP DB, Microsoft
SQL Server and IBM DB2/400 do not.
15. How many extra partitions are created and why?
• Two extra partitions are created: one for dates before the begin date and one
for dates after the end date.
16. What are the options available in transfer rule?
• InfoObject
• Constant
• Routine
• Formula
17. How would you optimize the dimensions?
• We should define as many dimensions as possible, taking care that no single
dimension exceeds 20% of the fact table size.
18. What are conversion routines for units and currencies in the update rule?
• Using this option we can write ABAP code for unit/currency conversion. If we
enable this flag, the unit of the key figure appears in the ABAP code as an
additional parameter. For example, we can convert units in pounds to kilograms.
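For instance, a two-line hedged sketch (the source field WEIGHT_LB is an assumption; RESULT and UNIT are the routine's changing parameters mentioned above):
* Convert pounds to kilograms before the key figure is updated.
RESULT = COMM_STRUCTURE-weight_lb * '0.45359237'.  " 1 lb = 0.45359237 kg
UNIT   = 'KG'.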
19. Can an InfoObject be an InfoProvider, how and why?
• Yes, when we want to report on Characteristics or Master Data. We have to
right click on the InfoArea and select "Insert characteristic as data
target". For example, we can make 0CUSTOMER as an InfoProvider and report
on it.
20. What is Open Hub Service?
• The Open Hub Service enables us to distribute data from an SAP BW system into
external Data Marts, analytical applications, and other applications. We can
ensure controlled distribution using several systems. The central object for
exporting data is the InfoSpoke. We can define the source and the target object
for the data. BW becomes a hub of an enterprise data warehouse.
The distribution of data becomes clear through central monitoring from the
distribution status in the BW system.
21. How do you transform Open Hub data?
• Using BADI we can transform Open Hub Data according to the destination
requirement.
22. What is ODS?
• An ODS (Operational Data Store) object is used for detailed storage of data.
We can overwrite data in the ODS. The data is stored in transparent tables.
23. What are BW Statistics and what are they used for?
• They are a group of Business Content InfoCubes used to measure performance
for query and load monitoring. They also show the usage of aggregates, OLAP
and warehouse management.
http://www.ittestpapers.com/articles/713/3/SAP-BW-Interview-Questions---Part-A/Page3.html
24. How do you load data from a flat file?
• Create the InfoSource with its communication structure and transfer rules
• Create an InfoPackage
• Load the data
25. What are the delta options available when you load from flat file?
• The 3 options for Delta Management with Flat Files:
o Full Upload
o New Status for Changed records (ODS Object only)
o Additive Delta (ODS Object & InfoCube)
Q) Under which menu path is the Test Workbench to be found, including in
earlier Releases?
The menu path is: Tools - ABAP Workbench - Test - Test Workbench.
Q) I want to delete a BEx query that is in the production system through a
request. Is anyone aware how to do this?
A) Have you tried transaction RSZDELETE?
Q) Errors while monitoring process chains.
A) Errors occur during data loading, but apart from those, a process chain
contains many process types. For example, after loading data into an InfoCube
you roll the data up into aggregates; this rollup is a process type placed
after the process type that loads data into the cube, and it might fail.
Another example: after you load data into an ODS, you activate the ODS data
(another process type); this might also fail.
Q) In Monitor → Details (Header/Status/Details) → Under Processing (data
packet): Everything OK → Context menu of Data Package 1 (1 record): Everything
OK → Simulate update. (Here we can debug update rules or transfer rules.)
SM50 → Program/Mode → Program → Debugging, and debug this work process.
Q) PSA cleansing.
A) You know how to edit the PSA. I don't think you can delete single records;
you have to delete the entire PSA data for a request.
Q) Can we make a DataSource support delta?
A) If this is a custom (user-defined) DataSource, you can make it
delta-enabled. While creating the DataSource in RSO2, after entering the
DataSource name and pressing Create, there is a button at the top of the next
screen labelled Generic Delta. If you want more details, there is a chapter on
this towards the end of the extraction book.
Generic delta services:
Supports delta extraction for generic extractors according to:
Time stamp
Calendar day
Numeric pointer, such as document number & counter
Only one of these attributes can be set as a delta attribute.
Delta extraction is supported for all generic extractors, such as tables/views,
SAP Query and function modules
The delta queue (RSA7) allows you to monitor the current status of the delta
attribute
Q) Workbooks, as a general rule, should be transported with the
role.
Here are a couple of scenarios:
1. If both the workbook and its role have been previously transported, then the
role does not need to be part of the transport.
2. If the role exists in both dev and the target system but the workbook has
never been transported, then you have a choice of transporting the role
(recommended) or just the workbook. If only the workbook is transported, then
an additional step has to be taken after import: locate the workbook ID via
table RSRWBINDEXT in dev, verify the same exists in the target system, and
manually add it to the role in the target system via transaction code PFCG --
ALWAYS use Ctrl+C/Ctrl+V copy/paste for manual additions!
3. If the role does not exist in the target system you should transport both
the role and workbook. Keep in mind that a workbook is an object unto itself
and has no dependencies on other objects. Thus, you do not receive an error
message from the transport of 'just a workbook' -- even though it may not be
visible, it will exist (verified via Table RSRWBINDEXT).
Overall, as a general rule, you should transport roles with workbooks.
Q) How much time does it take to extract 1 million (10 lakh) records into an
InfoCube?
A) It depends. If you have complex coding in the update rules it will take
longer; otherwise it will take less than 30 minutes.
Q) What are the five ASAP methodology phases?
A: Project Preparation, Business Blueprint, Realization, Final Preparation, and Go-Live & Support.
1. Project Preparation: In this phase, decision makers define clear project
objectives and an efficient decision making process ( i.e. Discussions with the
client, like what are his needs and requirements etc.). Project managers
will be involved in this phase (I guess).
A Project Charter is issued and an implementation strategy is outlined in this
phase.
2. Business Blueprint: A detailed documentation of the company's requirements
(i.e. which objects need to be developed or modified depending on the client's
requirements).
3. Realization: This is where the implementation of the project takes place
(development of objects etc.), and it is from here that we are involved.
4. Final Preparation: Final preparation before going live i.e. testing,
conducting pre-go-live, end user training etc.
End user training is given that is in the client site you train them how to
work with the new environment, as they are new to the technology.
5. Go-Live & support: The project has gone live and it is into production.
The Project team will be supporting the end users.
Q) What is the landscape of R/3 and what is the landscape of BW? (Not sure
about the landscape of R/3.)
Landscape of BW: you have the development system, the testing system and the
production system.
Development system: all the implementation work is done in this system (i.e.
analysis, developing and modifying objects, etc.), and from here the objects
are transported to the testing system; but before transporting, an initial
test known as unit testing (testing of objects) is done in the development
system.
Testing/quality system: quality checks and integration testing are done in this system.
Production system: all the extraction takes place in this system.
Q) How do you measure the size of an InfoCube?
A: In the number of records.
Q) Difference between an InfoCube and an ODS?
A: An InfoCube is structured as an (extended) star schema, where a fact table
is surrounded by different dimension tables that are linked with DIM IDs;
data-wise, you will have aggregated data in the cubes, with no overwrite
functionality.
An ODS is a flat structure (a flat table) with no star schema concept, which
holds granular data (detailed level) and has overwrite functionality.
Flat file DataSources do not support 0RECORDMODE in extraction.
0RECORDMODE values: X = before image, '' = after image, N = new image, A = additive, D = delete, R = reverse.
Q) Difference between display attributes and navigational attributes?
A: A display attribute is used only for display purposes in the report,
whereas a navigational attribute is used for drilling down in the report. The
advantage is that we do not need to maintain the navigational attribute as a
characteristic in the cube in order to drill down.
Q. SOME DATA IS UPLOADED TWICE INTO THE INFOCUBE. HOW DO WE CORRECT IT?
A: How is that possible? If you loaded it manually twice, you can delete one
load by request ID.
Q. CAN YOU ADD A NEW FIELD AT THE ODS LEVEL?
Sure you can. An ODS is nothing but a table.
Q. CAN A NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?
A) Yes, of course. For example, for loading texts and hierarchies we use
different DataSources but the same InfoSource.
Q. DESCRIBE THE DATA FLOW IN BW.
A) Data flows from the transactional system to the analytical system (BW).
DataSources on the transactional system need to be replicated on the BW side
and attached to an InfoSource and update rules respectively.
Q. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER
RULES?
Q) WHAT IS PROCEDURE TO UPDATE DATA INTO DATA TARGETS?
FULL and DELTA.
Q) WE USE SBWNN, SBIW1 AND SBIW2 FOR DELTA UPDATES IN LIS. WHAT IS THE
PROCEDURE IN THE LO COCKPIT?
There is no LIS in the LO Cockpit. We have DataSources there, which can be
maintained (fields appended). Refer to the white paper
on LO Cockpit extraction.
Q) Why do we delete the setup tables (LBWG) and fill them (OLI*BW)?
A) Initially we do not delete the setup tables, but we do so when we change
the extract structure. When the extract structure is changed, it contains
newly added fields that were not there before, so to get the required data
and avoid redundancy we delete and then refill the setup tables.
This also refreshes the statistical data.
The extraction setup reads the dataset that you want to process (such as
customer orders, with tables like VBAK and VBAP) and fills the relevant
communication structure with the data. The data is stored in cluster tables,
from where it is read when the initialization is run. It is important that
during the initialization phase no one generates or modifies application
data, at least until the setup tables are filled.
Q) SIGNIFICANCE of ODS?
It holds granular data (detailed level).
Q) WHERE IS THE PSA DATA STORED?
In PSA tables.
Q) WHAT IS DATA SIZE?
The volume of data one data target holds (in number of records).
Q) Different types of INFOCUBES.
Basic, and Virtual (remote, SAP remote and multi).
A virtual cube is used, for example, for something like railway reservations,
where all the information has to be up to date online. To design a virtual
cube you write a function module that links to the underlying table; the
virtual cube is like a structure, and whenever the table is updated the
virtual cube fetches the data from the table and displays the report online.
FYI, you can find more information at https://www.sdn.sap.com/sdn/index.sdn -
search for 'Designing Virtual Cube' and you will find good material on
designing the function module.
Q) INFOSET QUERY.
Can be made of ODS objects and characteristic InfoObjects with master data.
Q) IF THERE ARE 2 DATASOURCES, HOW MANY TRANSFER STRUCTURES ARE THERE?
In R/3 or in BW? 2 in R/3 and 2 in BW.
Q) ROUTINES?
Exist in the InfoObject, transfer routines, update routines and start routine
Q) BRIEF SOME STRUCTURES USED IN BEX.
Rows and Columns, you can create structures.
Q) WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
Different Variable's are Texts, Formulas, Hierarchies, Hierarchy nodes &
Characteristic values.
Variable Types are
Manual entry /default value
Replacement path
SAP exit
Customer exit
Authorization
Q) HOW MANY LEVELS CAN YOU GO DOWN IN REPORTING?
You can drill down to any level by using navigational attributes and jump
targets.
Q) WHAT ARE INDEXES?
Indexes are database indexes, which help in retrieving data quickly.
Q) DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.
Help! Refer documentation
Q) IS IT NECESSARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED?
No.
Q) WHAT IS THE SIGNIFICANCE OF KPI'S?
KPI's indicate the performance of a company. These are key figures
Q) AFTER THE DATA EXTRACTION, WHAT IS THE IMAGE POSITION?
After image (correct me if I am wrong).
Q) REPORTING AND RESTRICTIONS.
Help! Refer documentation.
Q) TOOLS USED FOR PERFORMANCE TUNING.
ST22, number ranges, deleting indexes before load, etc.
Q) PROCESS CHAINS: IF YOU HAVE USED THEM, HOW WILL YOU SCHEDULE DATA LOADS DAILY?
There should be some tool to run the job daily (SM37 jobs).
Q) AUTHORIZATIONS.
Profile generator
Q) WEB REPORTING.
What are you expecting??
Q) CAN A CHARACTERISTIC INFOOBJECT BE AN INFOPROVIDER?
Of course.
Q) PROCEDURES FOR REPORTING ON MULTICUBES?
Refer to the help. What are you expecting? A MultiCube works on a union condition.
Q) EXPLAIN TRANSPORTATION OF OBJECTS.
Dev → Q and Dev → P.
Q) What types of partitioning are there for BW?
There are two partitioning performance aspects for BW (cube & PSA):
A) Query data retrieval performance improvement: partitioning by (say) date
range improves data retrieval by making the best use of database [data range]
execution plans and indexes (of, say, the Oracle database engine).
B) Transactional load partitioning improvement: partitioning based on expected
load volumes and data element sizes improves data loading into the PSA and
cubes by InfoPackages (e.g. without timeouts).
Q) How can I compare data in R/3 with data in a BW Cube after the daily delta
loads? Are there any standard procedures for checking them or matching the
number of records?
A) You can go to R/3 TCode RSA3 and run the extractor. It will give you the
number of records extracted. Then go to BW Monitor to check the number of
records in the PSA and check to see if it is the same & also in the monitor
header tab.
A) RSA3 is a simple extractor checker program that allows you to rule out
extraction problems in R/3. It is simple to use, but only really tells you if the
extractor works. Since records that get updated into Cubes/ODS structures are
controlled by Update Rules, you will not be able to determine what is in the
Cube compared to what is in the R/3 environment. You will need to compare
records on a 1:1 basis against records in R/3 transactions for the functional
area in question. I would recommend enlisting the help of the end user community
to assist since they presumably know the data.
To use RSA3, go to it and enter the extractor ex: 2LIS_02_HDR. Click execute
and you will see the record count, you can also go to display that data. You
are not modifying anything so what you do in RSA3 has no effect on data quality
afterwards. However, it will not tell you how many records should be expected
in BW for a given load. You have that information in the monitor RSMO during
and after data loads. From RSMO for a given load you can determine how many
records were passed through the transfer rules from R/3, how many targets were
updated, and how many records passed through the Update Rules. It also gives
you error messages from the PSA.
Q) Types of transfer rules?
A) Field-to-field mapping, constant, variable & routine.
Q) Types of update rules?
A) (Checkbox), return table.
Q) Transfer routine?
A) Routines that we write in the transfer rules.
Q) Update routine?
A) Routines that we write in the update rules.
Q) What is the difference between writing a routine in the transfer rules and
writing a routine in the update rules?
A) If you are using the same InfoSource to update data in more than one data
target, it is better to write the routine in the transfer rules, because one
InfoSource can be assigned to more than one data target, whereas whatever
logic you write in the update rules is specific to one particular data target.
Q) Routine with Return Table.
A) Update rules generally have only one return value. However, you can create
a routine on the key figure calculation tab strip by choosing the checkbox
'Return table'. The corresponding key figure routine then no longer has a
return value, but a return table, and you can generate as many key figure
values as you like from one data record.
Q) Start routines?
A) Start routines can be written in both update rules and transfer rules.
Suppose you want to restrict (delete) some records based on conditions before
they get loaded into the data targets; you can specify this in the start
routine of the update rules.
Ex: DELETE DATA_PACKAGE WHERE ... - that is, it deletes records based on the
condition.
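A one-line sketch of such a filter, where OPEN_QTY is a hypothetical field of the communication structure standing in for whatever condition you need:
* Drop records with zero open quantity before they reach the data target.
DELETE DATA_PACKAGE WHERE open_qty = 0.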
Q) X & Y Tables?
X-table = A table to link material SIDs with SIDs for time-independent
navigation attributes.
Y-table = A table to link material SIDs with SIDS for time-dependent navigation
attributes.
There are four types of sid tables
X time independent navigational attributes sid tables
Y time dependent navigational attributes sid tables
H hierarchy sid tables
I hierarchy structure sid tables
Q) Filters & restricted key figures (real-time example).
Restricted key figures you can have for an SD cube: billed quantity, billing
value and number of billing documents as RKFs.
Q) Line-item dimension (give a real-time example).
Line-item dimension: invoice number or document number is a real-time example.
Q) What does the number in the 'Total' column in Transaction RSA7 mean?
A) The 'Total' column displays the number of LUWs that were written in the
delta queue and that have not yet been confirmed. The number includes the LUWs
of the last delta request (for repetition of a delta request) and the LUWs for
the next delta request. A LUW only disappears from the RSA7 display when it has
been transferred to the BW System and a new delta request has been received
from the BW System.
Q) Which table in SAP BW contains the technical name, description and creation
data of a particular report (reports created using BEx Analyzer)?
A) There is no single such table in BW; if you want to know such details while
opening a particular query, press the Properties button and you will see all
the details you wanted.
You will find information about the technical names and descriptions of
queries in the following tables: the directory of all reports (table
RSRREPDIR) and the directory of reporting component elements (table
RSZELTDIR); for workbooks and their connections to queries, check the
where-used list for reports in workbooks (table RSRWORKBOOK) and the titles
of Excel workbooks in the InfoCatalog (table RSRWBINDEXT).
Q) What is a LUW in the delta queue?
A) A LUW from the point of view of the delta queue can be an individual
document, a group of documents from a collective run or a whole data packet of
an application
extractor.
Q) Why does the number in the 'Total' column in the overview screen of
Transaction RSA7 differ from the number of data records that is displayed when
you call the detail view?
A) The number on the overview screen corresponds to the total of LUWs (see also
first question) that were written to the qRFC queue and that have not yet been
confirmed. The detail screen displays the records contained in the LUWs. Both,
the records belonging to the previous delta request and the records that do not
meet the selection conditions of the preceding delta init requests are filtered
out. Thus, only the records that are ready for the next delta request are
displayed on the detail screen. In the detail screen of Transaction RSA7, a
possibly existing customer exit is not taken into account.
Q) Why does Transaction RSA7 still display LUWs on the overview screen after
successful delta loading?
A) Only when a new delta has been requested does the source system learn that
the previous delta was successfully loaded to the BW System. Then, the LUWs of
the previous delta may be confirmed (and also deleted). In the meantime, the
LUWs must be kept for a possible delta request repetition. In particular, the
number on the overview screen does not change when the first delta was loaded
to the BW System.
Q) Why are selections not taken into account when the delta queue is filled?
A) Filtering according to selections takes place when the system reads from the
delta queue. This is necessary for reasons of performance.
Q) Why is there a DataSource with '0' records in RSA7 if delta exists and has
also been loaded successfully?
It is most likely that this is a DataSource that does not send delta data to
the BW System via the delta queue but directly via the extractor (delta for
master data using ALE change pointers). Such a DataSource should not be
displayed in RSA7. This error is corrected with BW 2.0B Support Package 11.
Q) Do the entries in table ROIDOCPRMS have an impact on the performance of the
loading procedure from the delta queue?
A) The impact is limited. If performance problems are related to the loading
process from the delta queue, then refer to the application-specific notes (for
example in the CO-PA area, in the logistics cockpit area and so on).
Caution: As of Plug In 2000.2 patch 3 the entries in table ROIDOCPRMS are as
effective for the delta queue as for a full update. Please note, however, that
LUWs are not split during data loading for consistency reasons. This means that
when very large LUWs are written to the DeltaQueue, the actual package size may
differ considerably from the MAXSIZE and MAXLINES parameters.
Q) Why does it take so long to display the data in the delta queue (for example
approximately 2 hours)?
A) With Plug In 2001.1 the display was changed: the user has the option of
defining the amount of data to be displayed, to restrict it, to selectively
choose the number of a data record, to make a distinction between the 'actual'
delta data and the data intended for repetition and so on.
Q) What is the purpose of function 'Delete data and meta data in a queue' in
RSA7? What exactly is deleted?
A) You should act with extreme caution when you use the deletion function in
the delta queue. It is comparable to deleting an InitDelta in the BW System and
should preferably be executed there. You do not only delete all data of this
DataSource for the affected BW System, but also lose the entire information
concerning the delta initialization. Then you can only request new deltas after
another delta initialization.
When you delete the data, the LUWs kept in the qRFC queue for the corresponding
target system are confirmed. Physical deletion only takes place in the qRFC
outbound queue if there are no more references to the LUWs.
The deletion function is for example intended for a case where the BW System,
from which the delta initialization was originally executed, no longer exists
or can no longer be accessed.
Q) Why does it take so long to delete from the delta queue (for example half a
day)?
A) Import PlugIn 2000.2 patch 3. With this patch the performance during
deletion is considerably improved.
Q) Why is the delta queue not updated when you start the V3 update in the
logistics cockpit area?
A) It is most likely that a delta initialization had not yet run or that the
delta initialization was not successful. A successful delta initialization (the
corresponding request must have QM status 'green' in the BW System) is a
prerequisite for the application data being written in the delta queue.
Q) What is the relationship between RSA7 and the qRFC monitor (Transaction
SMQ1)?
A) The qRFC monitor basically displays the same data as RSA7. The internal
queue name must be used for selection on the initial screen of the qRFC
monitor. This is made up of the prefix 'BW, the client and the short name of
the DataSource. For DataSources whose name are 19 characters long or shorter,
the short name corresponds to the name of the DataSource. For DataSources whose
name is longer than 19 characters (for delta-capable DataSources only possible
as of PlugIn 2001.1) the short name is assigned in table ROOSSHORTN.
In the qRFC monitor you cannot distinguish between repeatable and new LUWs.
Moreover, the data of a LUW is displayed in an unstructured manner there.
Q) Why is there data in the delta queue although the V3 update was not started?
A) Data was posted in background. Then, the records are updated directly in the
delta queue (RSA7). This happens in particular during automatic goods receipt
posting (MRRS). There is no duplicate transfer of records to the BW system. See
Note 417189.
Q) Why does button 'Repeatable' on the RSA7 data details screen not only show
data loaded into BW during the last delta but also data that were newly added,
i.e. 'pure' delta records?
A) It was programmed in such a way that a request in repeat mode fetches both
the actually repeatable (old) data and the new data from the source system.
Q) I loaded several delta inits with various selections. For which one is the
delta loaded?
A) For delta, all selections made via delta inits are summed up. This means, a
delta for the 'total' of all delta initializations is loaded.
Q) How many selections for delta inits are possible in the system?
A) With simple selections (intervals without complicated join conditions or
single values), you can make up to about 100 delta inits. It should not be
more.
With complicated selection conditions, it should be only up to 10-20 delta
inits.
Reason: With many selection conditions that are joined in a complicated way,
too many 'where' lines are generated in the generated ABAP
source code that may exceed the memory limit.
Q) I intend to copy the source system, i.e. make a client copy. What will
happen with my delta? Should I initialize again after that?
A) Before you copy a source client or source system, make sure that your deltas
have been fetched from the DeltaQueue into BW and that no delta is pending.
After the client copy, an inconsistency might occur between BW delta tables and
the OLTP delta tables as described in Note 405943. After the client copy, Table
ROOSPRMSC will probably be empty in the OLTP since this table is
client-independent. After the system copy, the table will contain the entries
with the old logical system name that are no longer useful for further delta
loading from the new logical system. The delta must be initialized in any case
since delta depends on both the BW system and the source system. Even if no
dump 'MESSAGE_TYPE_X' occurs in BW when editing or creating an InfoPackage,
you should expect that the delta has to be initialized after the copy.
Q) Is it allowed in Transaction SMQ1 to use the functions for manual control of
processes?
A) Use SMQ1 as an instrument for diagnosis and control only. Make changes to BW
queues only after informing the BW Support or only if this is explicitly
requested in a note for component 'BC-BW' or 'BW-WHM-SAPI'.
Q) Despite the delta request being started after completion of the collective
run (V3 update), it does not contain all documents. Only another delta request
loads the missing documents into BW. What is the cause of this "splitting"?
A) The collective run submits the open V2 documents for processing to the task
handler, which processes them in one or several parallel update processes in an
asynchronous way. For this reason, plan a sufficiently large "safety time
window" between the end of the collective run in the source system and the
start of the delta request in BW. An alternative solution where this problem
does not occur is described in Note 505700.
Q) Despite my deleting the delta init, LUWs are still written into the
DeltaQueue?
A) In general, delta initializations and deletions of delta inits should always
be carried out at a time when no posting takes place. Otherwise, buffer
problems may occur: If a user started the internal mode at a time when the
delta initialization was still active, he/she posts data into the queue even
though the initialization had been deleted in the meantime. This is the case in
your system.
Q) In SMQ1 (qRFC Monitor) I have status 'NOSEND'. In the table TRFCQOUT, some
entries have the status 'READY', others 'RECORDED'. ARFCSSTATE is 'READ'. What
do these statuses mean? Which values in the field 'Status' mean what and which
values are correct and which are alarming? Are the statuses BW-specific or
generally valid in qRFC?
A) Table TRFCQOUT and ARFCSSTATE: Status READ means that the record was read
once either in a delta request or in a repetition of the delta request.
However, this does not mean that the record has successfully reached the BW
yet. The status READY in the TRFCQOUT and RECORDED in the ARFCSSTATE means that
the record has been written into the DeltaQueue and will be loaded into the BW
with the next delta request or a repetition of a delta. In any case only the
statuses READ, READY and RECORDED in both tables are considered to be valid.
The status EXECUTED in TRFCQOUT can occur temporarily. It is set before
starting a DeltaExtraction for all records with status READ present at that
time. The records with status EXECUTED are usually deleted from the queue in
packages within a delta request directly after setting the status before
extracting a new delta. If you see such records, it means either that a
process which confirms and deletes records already loaded into the BW is
currently running, or, if the records remain in the table with status
EXECUTED for a longer period of time, that there are likely problems with
deleting records which have already been successfully loaded into the BW. In
this state, no more deltas are loaded into the BW. Every
other status is an indicator for an error or an inconsistency. NOSEND in SMQ1
means nothing (see note 378903).
The value 'U' in the field 'NOSEND' of table TRFCQOUT, however, is a cause for concern.
Q) The extract structure was changed when the DeltaQueue was empty. Afterwards
new delta records were written to the DeltaQueue. When loading the delta into
the PSA, it shows that some fields were moved. The same result occurs when the
contents of the DeltaQueue are listed via the detail display. Why are the data
displayed differently? What can be done?
Make sure that the change of the extract structure is also reflected in the
database and that all servers are synchronized. We recommend resetting the
buffers using transaction $SYNC. If the extract structure change is not
communicated synchronously to the server where delta records are being created,
the records are written with the old structure until the new structure has been
generated. This may have disastrous consequences for the delta.
When the problem occurs, the delta needs to be re-initialized.
Q) How and where can I control whether a repeat delta is requested?
A) Via the status of the last delta in the BW Request Monitor. If the request
is RED, the next load will be of type 'Repeat'. If you need to repeat the last
load for certain reasons, set the request in the monitor to red manually. For
the contents of the repeat, see Question 14. Delta requests set to red despite
the data already being updated lead to duplicate records in a subsequent
repeat if the data has not been deleted from the data targets concerned beforehand.
Q) As of PI 2003.1, the Logistic Cockpit offers various types of update
methods. Which update method is recommended in logistics? According to which
criteria should the decision be made? How can I choose an update method in
logistics?
See the recommendation in Note 505700.
Q) Are there particular recommendations regarding the data volume the
DeltaQueue may grow to without facing the danger of a read failure due to
memory problems?
A) There is no strict limit (except for the restricted number range of the
24-digit QCOUNT counter in the LUW management table - which is of no practical
importance, however - or the restrictions regarding the volume and number of
records in a database table).
When estimating "smooth" limits, both the number of LUWs and the average data
volume per LUW are important. As a rule, we recommend bundling data (usually
documents) already when writing to the DeltaQueue, to keep the number of LUWs
small (partly this can be set in the applications, e.g. in the Logistics
Cockpit). The data volume of a single LUW should not be considerably larger
than 10% of the memory available to the work process for data extraction
(in a 32-bit architecture with a memory volume of about 1GByte per work
process, 100 Mbytes per LUW should not be exceeded). That limit is of rather
small practical importance as well since a comparable limit already applies
when writing to the DeltaQueue. If the limit is observed, correct reading is
guaranteed in most cases.
If the number of LUWs cannot be reduced by bundling application transactions,
you should at least make sure that the data are fetched from all connected BWs
as quickly as possible. But for other, BW-specific, reasons, the frequency
should not be higher than one DeltaRequest per hour.
To avoid memory problems, a program-internal limit ensures that never more than
1 million LUWs are read and fetched from the database per DeltaRequest. If this
limit is reached within a request, the DeltaQueue must be emptied by several
successive DeltaRequests. We recommend, however, trying not to reach that
limit, and instead triggering the fetching of data from the connected BWs
already when the number of LUWs reaches a 5-digit value.
Q) I would like to display on the report the date the data was uploaded. We
usually load the transactional data nightly. Is there an easy way to include
this information on the report for users, so that they know how current the
report is?
A) If I understand your requirement correctly, you want to display the date on
which data was loaded into the data target from which the report is being
executed. If so, configure your workbook to display the text elements in the
report. This displays the 'relevance of data' field, which is the date on
which the data load took place.
Q) Can we filter the fields at Transfer Structure?
Q) Can we load data directly into an InfoObject, without extraction? Is it
possible?
Yes. We can copy from another InfoObject if it is the same, and we can load
data from the PSA if the data is already in the PSA.
Q) HOW MANY DAYS CAN WE KEEP THE DATA IN THE PSA IF WE ARE SCHEDULED DAILY,
WEEKLY AND MONTHLY?
a) We can set the time.
Q) HOW CAN YOU GET THE DATA FROM THE CLIENT IF YOU ARE WORKING ON OFFSHORE
PROJECTS? THROUGH WHICH NETWORK?
a) VPN (Virtual Private Network). A VPN is a kind of network through which we
can connect to the client systems from offshore, via RAS (Remote Access Server).
Q) HOW DO YOU ANALYZE THE PROJECT AT FIRST?
Prepare the project plan and environment.
Define project management standards and procedures.
Define implementation standards and procedures.
Testing & go-live + support.
Q) THERE IS ONE ODS AND 4 INFOCUBES. WE SEND DATA TO ALL CUBES AT THE SAME
TIME, AND ONE CUBE GETS A LOCK ERROR. HOW CAN YOU RECTIFY THE ERROR?
Go to TCode SM66, see which process is locked, select that PID, then go to
TCode SM12 and unlock it. Such lock errors occur because of the way the loads
were scheduled.
Q) Can anybody tell me how to add a navigational attribute to the rows of a
BEx report?
A) Expand the dimension in the left-hand panel (the InfoCube panel), select
the navigational attribute, and drag and drop it into the rows panel.
Q) IS THERE ANY TRANSACTION CODE LIKE SMPT OR STMT?
In current systems (BW 3.0B and R/3 4.6B) these TCodes do not exist!
Q) WHAT IS TRANSACTIONAL CUBE?
A) Transactional InfoCubes differ from standard InfoCubes in that the former
have an improved write access performance level. Standard InfoCubes are
technically optimized for read-only access and for a comparatively small number
of simultaneous accesses. Instead, the transactional InfoCube was developed to
meet the demands of SAP Strategic Enterprise Management (SEM), meaning that,
data is written to the InfoCube (possibly by several users at the same time)
and re-read as soon as possible. Standard Basic cubes are not suitable for
this.
Q) Is there any way to delete cube contents from within the update rules of an
ODS data source? The reason would be to delete (or zero out) a cube record in
an "Open Order" cube if the open order quantity was 0.
I've tried using 0RECORDMODE but that doesn't work. Also, would it be easier
to write a program that runs after the load and deletes the records with a
zero open quantity?
A) You can write ABAP code in a START routine of the update rules.
A) Yes, you can do it: create a start routine in the update rule. Strictly
speaking it is not "deleting cube contents with update rules"; it is only
possible to avoid certain content being updated into the InfoCube, using the
start routine. Loop over all the records and delete those that meet the
condition ("if the open order quantity was 0"). You also have to think about
before and after images in the case of a delta upload: in that case you might
delete the change record, keep the old one, and end up with wrong information
after the change.
Q) I am not able to access a node in a hierarchy directly using variables for
reports. When I use TCode RSZV, it gives a message that it doesn't exist in
BW 3.0 and is embedded in BEx. Can anyone tell me the other options to get
the same functionality in BEx?
A) TCode RSZV was used in earlier versions, up to 3.0B only. From 3.0B onwards
this is possible in the Query Designer (BEx) itself: just right-click on the
InfoObject you want to use as a variable and proceed further, selecting the
variable type and processing type. -
Hi,
Here are some BW interview questions. Make sure you have prepared for all of them before going for an interview.
1) Please describe your experience with BEx (Business Explorer)
A) Rate your level of experience with BEx and the rationale for your self-rating
B) How many queries have you developed?
C) How many reports have you written?
D) How many workbooks have you developed?
E) Experience with jump targets (OLTP, use jump target)
F) Describe experience with BW-compatible ETL tools (e.g. Ascential)
2) Describe your experience with 3rd party report tools (Crystal Decisions, Business Objects a plus)
3) Describe your experience with the design and implementation of standard & custom InfoCubes.
1. How many InfoCubes have you implemented from start to end by yourself (not with a team)?
2. Of these Cubes, how many characteristics (including attributes) did the largest one have.
3. How much customization was done on the InfoCubes have you implemented?
4) Describe your experience with requirements definition/gathering.
5) What experience have you had creating Functional and Technical specifications?
6) Describe any testing experience you have:
7) Describe your experience with BW extractors
1. How many standard BW extractors have you implemented?
2. How many custom BW extractors have you implemented?
8) Describe how you have used Excel as a complement to BEx
A) Describe your level of expertise and the rationale for your self-rating (experience with macros, pivot tables and formatting)
9) Describe experience with ABAP
10) Describe any hands on experience with ASAP Methodology.
11) Identify SAP functional areas (SEM, CRM, etc.) you have experience in. Describe that experience.
12) What is partitioning and what are the benefits of partitioning in an InfoCube?
A) Partitioning is the method of dividing a table (either column wise or row wise) based on the fields available which would enable a quick reference for the intended values of the fields in the table. By partitioning an infocube, the reporting performance is enhanced because it is easier to search in smaller tables. Also table maintenance becomes easier.
13) What does Rollup do?
A) Rollup creates aggregates in an infocube whenever new data is loaded.
14) What are the inputs for an infoset?
A) The inputs for an infoset are ODS objects and InfoObjects (with master data or text).
15) What internally happens when BW objects like Info Object, Info Cube or ODS are created and activated?
A) When an InfoObject, InfoCube or ODS object is created, BW maintains a saved version of that object but does not make it available for use. Once the object is activated, BW creates an active version that is available for use.
16) What is the maximum number of key fields that you can have in an ODS object?
A) 16.
17) What is the specific advantage of LO extraction over LIS extraction?
A) The load performance of LO extraction is better than that of LIS. In LIS two tables are used for delta management that is cumbersome. In LO only one delta queue is used for delta management.
18) What is the importance of 0REQUID?
A) It is the InfoObject for the request ID. 0REQUID enables BW to distinguish between the data records of different requests.
19) Can you add programs in the scheduler?
A) Yes. Through event handling.
20) What is the importance of the table ROIDOCPRMS?
A) It is an IDOC parameter source system. This table contains the details of the data transfer like the source system of the data, data packet size, maximum number of lines in a data packet, etc. The data packet size can be changed through the control parameters option on SBIW i.e., the contents of this table can be changed.
21) What is the importance of 'start routine' in update rules?
A) A Start routine is a user exit that can be executed before the update rule starts to allow more complex computations for a key figure or a characteristic. The start routine has no return value. Its purpose is to execute preliminary calculations and to store them in a global data structure. You can access this structure or table in the other routines.
22) When is IDOC data transfer used?
A) IDocs are used for communication between logical systems like SAP R/3, R/2 and non-SAP systems using ALE, and for communication between an SAP R/3 system and a non-SAP system. In BW, an IDoc is a data container for data exchange between SAP systems, or between SAP systems and external systems based on an EDI interface. IDocs support a limited record size of 1000 bytes, so IDocs are not used when loading data into the PSA, where the data is more detailed; IDoc transfer is used when the record size is less than 1000 bytes.
23) What is partitioning characteristic in CO-PA used for?
A) For easier parallel search and load of data.
24) What is the advantage of BW reporting on CO-PA data compared with directly running the queries on CO-PA?
A) BW has a better performance advantage over reporting in R/3. For a huge amount of data, the R/3 reporting tool is at a serious disadvantage because R/3 is modeled as an OLTP system and is good for transaction processing rather than analytical processing.
25) What is the function of BW statistics cube?
A) BW statistics cube contains the data related to the reporting performance and the data loads of all the InfoCubes in the BW system.
26) When an ODS is in 'overwrite' mode, does uploading the same data again and again create new entries in the change log each time data is uploaded?
A) No.
27) What is the function of 'selective deletion' tab in the manage->contents of an infocube?
A) It allows us to select a particular value of a particular field and delete its contents.
28) When we collapse (compress) an InfoCube, is the consolidated data stored in the same InfoCube or in a new one?
A) Data is stored in the same cube.
29) What is the effect of aggregation on the performance? Are there any negative effects on the performance?
A) Aggregation improves the performance in reporting.
30) What happens when you load transaction data without loading master data?
A) The transaction data gets loaded and the master data fields remain blank.
31) When given a choice between a single infocube and multiple InfoCubes with a multiprovider, what factors does one need to consider before making a decision?
A) One would have to see if the InfoCubes are used individually. If these cubes are often used individually, then it is better to go for a multiprovider with many cubes since the reporting would be faster for an individual cube query rather than for a big cube with lot of data.
32) How many hierarchy levels can be created for a characteristic info object?
A) Maximum of 98 levels.
33) What is open hub service?
A) The open hub service enables you to distribute data from an SAP BW system into external data marts, analytical applications, and other applications. With this, you can ensure controlled distribution using several systems. The central object for the export of data is the Infospoke. Using this, you can define the object from which the data comes and into which target it is transferred. Through the open hub service, SAP BW becomes a hub of an enterprise data warehouse. The distribution of data becomes clear through central monitoring from the distribution status in the BW system.
34) What is the function of 'reconstruction' tab in an infocube?
A) It reconstructs the deleted requests from the infocube. If a request has been deleted and later someone wants the data records of that request to be added to the infocube, one can use the reconstruction tab to add those records. It goes to the PSA and brings the data to the infocube.
35) What are secondary indexes with respect to InfoCubes?
A) Index created in addition to the primary index of the infocube. When you activate a table in the ABAP Dictionary, an index is created on the primary key fields of the table. Further indexes created for the table are called secondary indexes.
36) What is DB Connect and where is it used?
A) DB Connect is a database connection component; it is used to connect third-party tools with BW for reporting purposes.
37) Can we extract hierarchies from R/3 for CO-PA?
A) No, we cannot; there are no hierarchies in CO-PA.
38) Explain the field name for partitioning in CO-PA.
A) CO-PA partitioning is used to decrease the package size (e.g. by company code).
39) What is V3 update method ?
A) It is a program in R/3 source system that schedules batch jobs to update extract structure to data source collectively.
40) Differences between serialized and non-serialized V3 updates
41) What is the common method of finding the tables used in any R/3 extraction
A) By using the transaction LISTSCHEMA we can navigate the tables.
42) Differences between table view and infoset query
A) An InfoSet Query is a query using flat tables.
43) How to load data from one InfoCube to another InfoCube ?
A) Through the data mart interface, data can be loaded from one InfoCube to another InfoCube (the source cube acts as an export DataSource).
44) What is the significance of setup tables in LO extractions ?
A) Setup tables hold the historical data for an LO extraction. The LO extractors do not read the application tables directly for full or init loads; instead, the setup tables are filled first (via the restructuring run) and the init/full upload reads from them.
45) Difference between extract structure and datasource
A) The DataSource defines which data is extracted from the source system, whereas the extract structure is the record layout in which that data is staged; on top of it we can define extraction rules and transfer rules.
B) The extract structure is a record layout of InfoObjects.
C) The extract structure is created in the source system and is replicated to BW as part of the DataSource definition.
46) What happens internally when Delta is Initialized
47) What is referential integrity mechanism ?
A) Referential integrity is the property that guarantees that values from one column depend on values from another column. This property is enforced through integrity constraints.
48) What is activation of extract structure in LO ?
49) What is the difference between Info IDoc and data IDoc ?
50) What is D-Management in LO ?
A) Presumably delta management: the mechanism underlying the LO delta update methods, which is based on the change records written in LO.
Please, experts, provide the answers for the remaining questions.
Thanks in advance.
Sunil
Hi,
In my case I don't have experience in BW; I went straight to the academy, so it is like I am starting a new career. Do these questions also apply to me? -
Best way to implement a word frequency counter (input = textfile)?
I had this for an interview question and basically came up with the solution where you use a hash table...
//create hash table
//bufferedreader
//read file in,
//for each word encountered, create an object that has (String word, int count) and push into hash table
//then loop and read out all the hash table entries
===skip this stuff if you dont feel like reading too much
then the interviewer proceeded to grill me on why I shouldn't use a tree or any other data structure for that matter... I was kinda stumped on that.
Also, he asked me what happens if the number of words exceeds the capacity of the hash table? I said you can increase the capacity of the hash table, but that doesn't sound too efficient and I'm not sure by how much to increase it. I had some OK solutions:
1. read the file thru once, and get the number of words in the file, set the hashtable capacity to that number
2. do #1, but run another algorithm that will figure out the distinct # of words
3. separate chaining
===
Anyhow, what kind of answers/algorithms would you guys have come up with? Thanks in advance.
i had this for an interview question and basically
came up with the solution where you use a hash
table...
//create hash table
//bufferedreader
//read file in,
//for each word encountered, create an object that
Well, first you need to check to make sure the word is not already in the hashtable, right? And if it is there, you need to increment the count.
has (String word, int count) and push into hash
table
//then loop and read out all the hash table entries
===skip this stuff if you dont feel like reading too
much
then the interviewer proceeded to grill me on why i
shouldn't use a tree or any other data structure for
that matter... i was kidna stumped on that.
A hashtable has amortized O(1) time for insert and search. A balanced binary search tree has O(log n) complexity for the same operations. So, a hashtable will be faster for a large number of words. The other option is a so-called "trie" (google for more), which has O(m) complexity, where m is the length of the word. So if your words aren't too long, a trie may be just as fast as a hashtable. The trie may also use less memory than the hashtable.
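As an aside, here is a minimal sketch of that trie alternative (my own illustration, not anything from the thread; it assumes lower-case ASCII words and a fixed 26-way child array, which is just one possible node layout):

// Counting with a trie: each lookup/insert walks one node per character,
// so the cost is O(m) in the word's length, regardless of how many words are stored.
class TrieNode {
    TrieNode[] children = new TrieNode[26]; // one slot per lower-case letter
    int count;                              // occurrences of the word ending at this node
}

class WordTrie {
    private final TrieNode root = new TrieNode();

    // Walk (and create) one node per character, then bump the terminal count.
    void add(String word) {
        TrieNode node = root;
        for (char c : word.toCharArray()) {
            int i = c - 'a';
            if (node.children[i] == null) node.children[i] = new TrieNode();
            node = node.children[i];
        }
        node.count++;
    }

    int count(String word) {
        TrieNode node = root;
        for (char c : word.toCharArray()) {
            node = node.children[c - 'a'];
            if (node == null) return 0; // word was never inserted
        }
        return node.count;
    }
}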
also he asked me what happens if the number of words
exceed the capacity of the hash table? i said you can
increase the capacity of the hash table, but it
doesn't sound too efficient and im not sure how much
you know how to increase it by. i had some ok
solutions:
The hashmap implementation that comes with Java grows automatically, so you don't need to worry about it. It may not "sound" efficient to have to copy the entire data structure, but the copy happens quickly and occurs relatively infrequently compared with the number of words you'll be inserting.
1. read the file thru once, and get the number of
words in the file, set the hashtable capacity to that
number
2. do #1, but run anotehr alogrithm that will figure
out distinct # of words
3. separate chaining
===
anyhow what kind of answeres/algorithms would you
guys have come up with? thanks in advance.
I would do anything to avoid making two passes over the data. Assuming you're reading it from disk, most of the time will be spent reading from disk, not inserting into the hashtable. If you really want to size the hashtable a priori, you can make it big enough to hold a typical English vocabulary, which is on the order of a few tens of thousands of distinct words.
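For what it's worth, here is a minimal single-pass sketch of the hashtable approach being discussed (my own illustration; the split() regex is just one assumption about what counts as a "word"):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class WordFrequencyCounter {
    // One pass over the file: insert each word with count 1, or increment it.
    public static Map<String, Integer> countWords(String path) throws IOException {
        Map<String, Integer> counts = new HashMap<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // Treat any run of letters as a word; everything else is a separator.
                for (String word : line.toLowerCase().split("[^a-z]+")) {
                    if (!word.isEmpty()) {
                        counts.merge(word, 1, Integer::sum); // check-then-increment in one call
                    }
                }
            }
        }
        return counts;
    }

    public static void main(String[] args) throws IOException {
        countWords(args[0]).forEach((w, c) -> System.out.println(w + ": " + c));
    }
}

Map.merge() does the "check whether it's already there, then increment" step from the earlier reply in a single call, and the HashMap resizes itself as it fills.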
And relax, you had the right answer. I used to work in this field, and this is exactly how we implemented our frequency counter, and it worked perfectly well. Don't let these interviewers push you around; just tell them why you thought the hashtable was the best choice and show off your analytical skills! -
WPF How can I implement the INotifyPropertyChanged in a Three-tier architecture?
I am a student and I am confused on using the INotifyPropertyChanged in a three-tier style of coding. Can you guys help me a bit with these?
I have a solution named MetroAppProject. It is composed of four projects (I omitted the using clauses and references, just imagine they are there and are working fine):
1. MetroApp.BluePrints - a class library composed of the classes in my sql db
An example of my class
namespace MetroApp.BluePrints
public partial class Patient
public long Id { get; set; }
public string PatientNumber { get; set; }
public string LastName { get; set; }
public string FirstName { get; set; }
public string MiddleName { get; set; }
public string AddressLine1 { get; set; }
public Nullable<short> CityId { get; set; }
public string CityName { get; set; }
public Nullable<short> ProvinceId { get; set; }
public string ProvinceName { get; set; }
Then the second project:
2. MetroApp.DataAccess = a class library composed of methods that calls my sql procedures. I used the SqlHelper class which contains the connection strings and other stuffs.
example class
namespace MetroApp.DataAccess
public class PatientDb
public Patient Retrieve(PatientParams parameters)
SqlCommand command = new SqlCommand();
Patient singItem = new Patient();
command.CommandText = "RetrievePatients";
command.CommandType = CommandType.StoredProcedure;
command.Parameters.AddWithValue("@Id", parameters.Id).Direction = ParameterDirection.Input;
DataTable dt = SqlHelper.GetData(command);
if (dt.Rows.Count > 0)
DataRow row = dt.Rows[0];
singItem.Id = TDefaultValue.GetInt(row["Id"].ToString());
singItem.PatientNumber = TDefaultValue.GetString(row["PatientNumber"].ToString());
singItem.LastName = TDefaultValue.GetString(row["LastName"].ToString());
singItem.FirstName = TDefaultValue.GetString(row["FirstName"].ToString());
singItem.MiddleName = TDefaultValue.GetString(row["MiddleName"].ToString());
singItem.AddressLine1 = TDefaultValue.GetString(row["AddressLine1"].ToString());
singItem.CityId = TDefaultValue.GetShort(row["CityId"].ToString());
singItem.CityName = TDefaultValue.GetString(row["CityName"].ToString());
singItem.ProvinceId = TDefaultValue.GetShort(row["ProvinceId"].ToString());
singItem.ProvinceName = TDefaultValue.GetString(row["ProvinceName"].ToString());
return singItem;
public List<Patient> RetrieveMany(PatientParams parameters)
var items = new List<Patient>();
var command = new SqlCommand();
command.CommandText = "RetrievePatients";
command.CommandType = CommandType.StoredProcedure;
command.Parameters.AddWithValue("@Id", parameters.Id).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@PatientNumber", parameters.PatientNumber).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@LastName", parameters.LastName).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@FirstName", parameters.FirstName).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@MiddleName", parameters.MiddleName).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@CityId", parameters.CityId).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@ProvinceId", parameters.ProvinceId).Direction = ParameterDirection.Input;
DataTable dt = SqlHelper.GetData(command);
foreach (DataRow row in dt.Rows)
var item = new Patient();
item.Id = TDefaultValue.GetLong(row["Id"].ToString());
item.PatientNumber = (row["PatientNumber"].ToString());
item.LastName = (row["LastName"].ToString());
item.FirstName = (row["FirstName"].ToString());
item.MiddleName = (row["MiddleName"].ToString());
item.AddressLine1 = (row["AddressLine1"].ToString());
item.CityId = TDefaultValue.GetShort(row["CityId"].ToString());
item.CityName = (row["CityName"].ToString());
item.ProvinceId = TDefaultValue.GetShort(row["ProvinceId"].ToString());
item.ProvinceName = (row["ProvinceName"].ToString());
items.Add(item);
return items;
public bool Insert(Patient entity, int userId, ref bool doesExist)
var command = new SqlCommand();
try
command.CommandText = "AddPatient";
command.CommandType = CommandType.StoredProcedure;
command.Parameters.AddWithValue("@PatientNumber", entity.PatientNumber).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@LastName", entity.LastName).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@FirstName", entity.FirstName).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@MiddleName", entity.MiddleName).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@AddressLine1", entity.AddressLine1).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@CityId", entity.CityId).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@ProvinceId", entity.ProvinceId).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@Id", entity.Id).Direction = ParameterDirection.Input;
command.Parameters.Add("@DoesExist", SqlDbType.Bit).Direction = ParameterDirection.Output;
int result = SqlHelper.ExecuteNonQuery(command);
doesExist = (bool)(command.Parameters["@DoesExist"].Value);
entity.Id = (long)(command.Parameters["@Id"].Value);
if (result == 0 || doesExist)
return false;
return true;
catch (Exception)
return false;
public bool Update(Patient entity, int userId, ref bool doesExist)
var command = new SqlCommand();
try
command.CommandText = "EditPatient";
command.CommandType = CommandType.StoredProcedure;
command.Parameters.AddWithValue("@PatientNumber", entity.PatientNumber).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@LastName", entity.LastName).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@FirstName", entity.FirstName).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@MiddleName", entity.MiddleName).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@AddressLine1", entity.AddressLine1).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@CityId", entity.CityId).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@ProvinceId", entity.ProvinceId).Direction = ParameterDirection.Input;
command.Parameters.AddWithValue("@Id", SqlDbType.Int).Direction = ParameterDirection.Output;
command.Parameters.Add("@DoesExist", SqlDbType.Bit).Direction = ParameterDirection.Output;
int result = SqlHelper.ExecuteNonQuery(command);
doesExist = (bool)(command.Parameters["@DoesExist"].Value); // read the output parameter only after executing
if (result == 0 || doesExist)
return false;
return true;
catch (Exception)
return false;
Then a business logic
3. MetroApp.BusinessLogic = class libray for calling the methods from DataAccess
namespace MetroApp.BusinessLogic
public class PatientMgr
#region Fields
private readonly PatientDb _db;
#endregion
#region Properties
public Patient Entity { get; set; }
public List<Patient> EntityList { get; set; }
public PatientParams Parameters { get; set; }
#endregion
#region Constructors
public PatientMgr()
_db = new PatientDb();
Entity = new Patient();
EntityList = new List<Patient>();
Parameters = new PatientParams();
#endregion
#region Methods
public Patient Retrieve(PatientParams parameters)
return _db.Retrieve(parameters);
public List<Patient> RetrieveMany(PatientParams parameters)
return _db.RetrieveMany(parameters);
public bool Insert(Patient entity, int userId, ref bool doesExist)
return _db.Insert(entity, userId, ref doesExist);
public bool Update(Patient entity, int userId, ref bool doesExist)
return _db.Update(entity, userId, ref doesExist);
#endregion
Then the last one, the WPF GUI
<UserControl x:Class="MetroDentProject.Pages.PatientDetailsPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:dims="clr-namespace:MetroAppProject.UserCons"
mc:Ignorable="d"
d:DesignHeight="720" d:DesignWidth="1280">
<Grid x:Name="MainGrid" >
<Grid.RowDefinitions>
<RowDefinition Height="40"/>
<RowDefinition />
<RowDefinition />
<RowDefinition />
<RowDefinition />
<RowDefinition />
<RowDefinition />
<RowDefinition />
<RowDefinition />
<RowDefinition />
<RowDefinition />
<RowDefinition />
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition />
<ColumnDefinition />
<ColumnDefinition />
</Grid.ColumnDefinitions>
<GroupBox Grid.Column="0" Grid.Row="1" Grid.RowSpan="7" x:Name="DetailsGroupBox" Header="Patient Details" >
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition/>
<ColumnDefinition/>
</Grid.ColumnDefinitions>
<Grid.RowDefinitions>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
</Grid.RowDefinitions>
<TextBlock Text="Id: " Grid.Column="1" Grid.Row="0" Visibility="Collapsed"/>
<TextBox x:Name="IdTextBox" Grid.Column="1" Grid.Row="1" Visibility="Collapsed"/>
<TextBlock x:Name="PatientNumberTextBlock" Text="Patient Number: " Grid.Column="0" Grid.Row="0" />
<TextBox x:Name="PatientNumberTextBox" Grid.Column="1" Grid.Row="0" IsReadOnly="True" IsReadOnlyCaretVisible="True"/>
<TextBlock Text="Last Name: " Grid.Column="0" Grid.Row="1" />
<TextBox x:Name="LastNameTextBox" Grid.Column="1" Grid.Row="1" />
<TextBlock Text="First Name: " Grid.Column="0" Grid.Row="2" />
<TextBox x:Name="FirstNameTextBox" Grid.Column="1" Grid.Row="2" />
<TextBlock Text="Middle Name: " Grid.Column="0" Grid.Row="3" />
<TextBox x:Name="MiddleNameTextBox" Grid.Column="1" Grid.Row="3" />
</Grid>
</GroupBox>
<GroupBox x:Name="ContactDetailsGroupBox" Header="Contact Details" Grid.Column="1" Grid.Row="1" Grid.RowSpan="7">
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition/>
<ColumnDefinition />
</Grid.ColumnDefinitions>
<Grid.RowDefinitions>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
</Grid.RowDefinitions>
<TextBlock Text="Address: " Grid.Column="0" Grid.Row="0" Grid.RowSpan="2" />
<TextBlock Text="City: " Grid.Column="0" Grid.Row="2" />
<TextBlock Text="Province: " Grid.Column="0" Grid.Row="3"/>
<TextBox x:Name="AddressTextBox" Grid.Column="1" Grid.Row="0" Grid.RowSpan="2"
TextWrapping="Wrap"
AcceptsReturn="True"
VerticalScrollBarVisibility="Auto"
/>
<ComboBox x:Name="CitiesComboBox" Grid.Column="1" Grid.Row="2" />
<ComboBox x:Name="ProvincesComboBox" Grid.Column="1" Grid.Row="3" />
</Grid>
</GroupBox>
<dims:FunctionButtonsControl x:Name="FunctionButtonsCon" Grid.Row="9" Grid.Column="0" Grid.ColumnSpan="2"
ExecuteClick="FunctionButtonsCon_OnExecuteClick"
UndoClick="FunctionButtonsCon_OnUndoClick"
BackClick="FunctionButtonsCon_OnBackClick"
DeleteClick="FunctionButtonsCon_OnDeleteClick"
/>
</Grid>
</UserControl>
I apologize for the long post. As you can see, I don't use binding. Binding requires the INotifyPropertyChanged interface, which I am not familiar with. Can you at least show me how to make my project implement INotifyPropertyChanged?
Here is my sample code for the WPF page:
public partial class PatientDetailsPage
readonly PatientMgr itemMgr = new PatientMgr();
public PatientParams CurrentPar = new PatientParams(); // for undo
ActionType _action = ActionType.Insert; // this is an enum from another project: ActionType.Insert, ActionType.Update
public ActionType Action
get { return _action; }
set { _action = value; }
public PatientDetailsPage()
InitializeComponent();
BindComboBoxes();
#region Methods
public void OnFragmentNavigation(FragmentNavigationEventArgs e)
public void OnNavigatedFrom(NavigationEventArgs e)
public void OnNavigatedTo(NavigationEventArgs e)
{ Setup();
public void OnNavigatingFrom(NavigatingCancelEventArgs e)
public Patient GetPageEntity()
Patient setEntity = new Patient();
setEntity.Id = long.Parse(IdTextBox.Text);
setEntity.PatientNumber = PatientNumberTextBox.Text;
setEntity.LastName = LastNameTextBox.Text;
setEntity.FirstName = FirstNameTextBox.Text;
setEntity.MiddleName = MiddleNameTextBox.Text;
setEntity.AddressLine1 = AddressTextBox.Text;
setEntity.CityId = (short)CitiesComboBox.SelectedValue;
setEntity.ProvinceId = (short)ProvincesComboBox.SelectedValue;
setEntity.StatusId = true;
return setEntity;
public void Setup()
switch (Action)
case ActionType.Insert:
Clearer(); //clears all textboxes and set all comboboxes to default
this.PatientNumberTextBlock.Visibility = Visibility.Collapsed;
this.PatientNumberTextBox.Visibility = Visibility.Collapsed;
FunctionButtonsCon.ExecuteButton.Content = "Add";
FunctionButtonsCon.DeleteButton.IsEnabled = false;
FunctionButtonsCon.DeleteButton.Visibility = Visibility.Hidden;
break;
//**Setup Update
case ActionType.Update:
CurrentPar.Id = long.Parse(IdTextBox.Text);
LoadSingle(CurrentPar);
this.PatientNumberTextBlock.Visibility = Visibility.Visible;
this.PatientNumberTextBox.Visibility = Visibility.Visible;
FunctionButtonsCon.ExecuteButton.Content = "Save";
FunctionButtonsCon.DeleteButton.IsEnabled = true;
FunctionButtonsCon.DeleteButton.Visibility = Visibility.Visible;
break;
LastNameTextBox.CaretIndex = LastNameTextBox.Text.Length;
IsVisibleChanged += AutoFocus;
public void LoadSingle(PatientParams parameters)
var entity = itemMgr.Retrieve(parameters); //calls the BusinessLogic
IdTextBox.Text = entity.Id.ToString();
PatientNumberTextBox.Text = (entity.PatientNumber);
LastNameTextBox.Text = (entity.LastName);
FirstNameTextBox.Text = (entity.FirstName);
MiddleNameTextBox.Text = (entity.MiddleName);
AddressTextBox.Text = (entity.AddressLine1);
CitiesComboBox.SelectedValue = (short)entity.CityId;
ProvincesComboBox.SelectedValue = (short)entity.ProvinceId;
public void Save(ActionType action, int userId)
itemMgr.Entity = GetPageEntity();
bool doesExist = false;
switch (action)
case ActionType.Insert:
if (itemMgr.Insert((itemMgr.Entity), userId, ref doesExist))
System.Windows.Forms.MessageBox.Show("Successfully added a Patient!", "Patient Insertion");
else if (doesExist)
System.Windows.Forms.MessageBox.Show("Item already exists.", "Patient Insertion");
else
System.Windows.Forms.MessageBox.Show("Not all fields were filled in.", "Patient Insertion");
break;
case ActionType.Update:
if (itemMgr.Update(itemMgr.Entity, userId, ref doesExist))
System.Windows.Forms.MessageBox.Show("Successfully updated a Patient!", "Patient Modification");
itemMgr.Parameters.Id = itemMgr.Entity.Id;
Action = ActionType.Update;
Setup();
else if (doesExist)
System.Windows.Forms.MessageBox.Show("Item already exists.", "Patient Modification");
else
System.Windows.Forms.MessageBox.Show("Not all fields were filled in.", "Patient Modification");
break;
public void Clearer()
IdTextBox.Clear();
PatientNumberTextBox.Clear();
LastNameTextBox.Clear();
FirstNameTextBox.Clear();
MiddleNameTextBox.Clear();
CitiesComboBox.SelectedIndex = 0;
ProvincesComboBox.SelectedIndex = 0;
AddressTextBox.Clear();
public void BindComboBoxes()
CitiesComboBox.ItemsSource = new BindingSource(CommonMgr.GetCitiesDropDown(), null);// the CommonMgr is a static class from another project. It works just fine
CitiesComboBox.DisplayMemberPath = "Value";
CitiesComboBox.SelectedValuePath = "Key";
ProvincesComboBox.ItemsSource = new BindingSource(CommonMgr.GetProvincesDropDown(), null);
ProvincesComboBox.DisplayMemberPath = "Value";
ProvincesComboBox.SelectedValuePath = "Key";
CitiesComboBox.SelectedIndex = 0;
ProvincesComboBox.SelectedIndex = 0;
#endregion
#region Events
private void FunctionButtonsCon_OnExecuteClick(object sender, RoutedEventArgs e)
Save(Action, SessionHelper.MyUser.Id); //SessionHelper.MyUser.Id
private void FunctionButtonsCon_OnUndoClick(object sender, RoutedEventArgs e)
if (Action == ActionType.Insert)
Clearer();
return;
private void FunctionButtonsCon_OnBackClick(object sender, RoutedEventArgs e)
Exiter();
private void FunctionButtonsCon_OnDeleteClick(object sender, RoutedEventArgs e)
var ans = System.Windows.Forms.MessageBox.Show("Are you sure you want to delete this entry?", "Patient Deletion", MessageBoxButtons.YesNo);
if (!Equals(ans, System.Windows.Forms.DialogResult.Yes)) return;
Action = ActionType.Delete;
Save(Action, SessionHelper.MyUser.Id);
Exiter();
#endregion
Hello Kokombads,
From your title I thought you were using MVVM, but it seems your project is just a simple WPF project. In that case, please check the following MSDN article on how to implement property change notification:
https://msdn.microsoft.com/en-us/library/ms743695(v=vs.110).aspx
using System.ComponentModel;

namespace SDKSample
{
    // This class implements INotifyPropertyChanged
    // to support one-way and two-way bindings
    // (such that the UI element updates when the source
    // has been changed dynamically)
    public class Person : INotifyPropertyChanged
    {
        private string name;

        // Declare the event
        public event PropertyChangedEventHandler PropertyChanged;

        public Person() { }

        public Person(string value)
        {
            this.name = value;
        }

        public string PersonName
        {
            get { return name; }
            set
            {
                name = value;
                // Call OnPropertyChanged whenever the property is updated
                OnPropertyChanged("PersonName");
            }
        }

        // Create the OnPropertyChanged method to raise the event
        protected void OnPropertyChanged(string name)
        {
            PropertyChangedEventHandler handler = PropertyChanged;
            if (handler != null)
            {
                handler(this, new PropertyChangedEventArgs(name));
            }
        }
    }
}
It is not so complex; you only need to refer to the interface documentation here:
https://msdn.microsoft.com/en-us/library/system.componentmodel.inotifypropertychanged(v=vs.110).aspx
And understand that you have to do the following:
For change notification to occur in a binding between a bound client and a data source, your bound type should either:
Implement the INotifyPropertyChanged interface (preferred).
Provide a change event for each property of the bound type
Best regards,
Barry
SBO Ebook: Certification and Interview Questions and Answers
Hi, please permit me to use this forum to introduce you to this ebook titled: [SAP BUSINESS ONE SOLUTION CONSULTANT CERTIFICATION REVIEW AND INTERVIEW: QUESTIONS, ANSWERS AND EXPLANATIONS|http://www.ebookmall.com/ebook/277772-ebook.htm]
This book consists of real life and scenario based review questions and answers on SAP Business One solution certification examinations with Booking Codes/Certification ID: C_TB1200_04, C_TB1200_05 and C_TB1200_07. It covers the SAP Business One Solution Consultant curriculum namely:
- TB 1000 - SAP Business One Logistics
- TB 1100 - SAP Business One Accounting
- TB 1200 - SAP Business One Implementation and Support
The book is targeted at:
- SAP Business One Consultants preparing for the Solution certification exams (C_TB1200_04, C_TB1200_05 and C_TB1200_07)
- SAP Business One Solution Consultant Job Seekers
- SAP Business One Solution Consultant recruiters
- SAP Business One Implementation team
- SAP Business One Project Managers
In this book, you will find:
- SAP Business One Solution Consultant Certification Areas of Concentration (AoC)
- SAP Business One Solution Consultant Certification Curriculum
- Things you must know about the SAP Business One Solution Consultant Certification Examination
- SAP Business One Certification review questions and detailed answers
- SAP Business One Interview questions and detailed answers
It can be downloaded at http://www.ebookmall.com/ebook/277772-ebook.htm.
Hi Dan,
Thanks for your observation, comment and review.
1. As a matter of fact, since many features of SAP B1 have not changed, just as you asserted, I took cognizance of the new enhancements to the solution over the various releases, especially as they relate to the functionalities. By extension, most questions for a prior release also apply to its successor release.
Furthermore, there was a mix-up in the download. Ideally, you should have three sections in the book: Section I (release 2004; by extension, the 2005 and 2007 releases), Section II (release 2005; by extension, the 2007 release) and Section III (release 2007). The document has been revised. Hence, everyone who bought the book before 29th of April 2008 should visit my [blog|http://blogs.ittoolbox.com/sap/kehinde/archives/sap-business-one-solution-consultant-ebook-review-notice-24053] for how to get a copy of the revised version within 24 hours at no extra cost. I regret any inconvenience. PLEASE DO NOT LEAVE YOUR EMAIL ADDRESS ON THIS FORUM.
2. On localization, SAP Business One has more than 10,000 installations across the world. The book is not intended to be "localization specific"; it is intended to serve as a certification review for functionality that cuts across the board, with a mix of localized functionality. I am sure you found in there a number of localization questions for other countries like the UK. My advice for individuals using the book is to identify which questions apply to their localization.
While I await your review of the [revised version|http://www.ebookmall.com/ebook/277772-ebook.htm] as an SAP Business One advisor, I believe you will agree with me that it is an invaluable resource for preparing for the certification exam and also technical interview sessions.
Thanks -
Organization Management Interview Questions and Answers Extremely Urgent
Hi,
Please let me know Organization Management Interview Questions and Answers. MOST MOST URGENT
Please do not post links or website names; a detailed response will be highly appreciated.
Very Respectfully,
Sameer.
SAP HR.
Hi there,
Please find herewith the answers to the questions posted on the forum.
1. What are plan versions used for?
Ans : Plan versions are scenarios in which you can create organizational plans.
In the plan version which you have flagged as the active plan version, you create your current valid organizational plan. This is also the integration plan version which will be used if integration with Personnel Administration is active.
You use additional plan versions to create additional organizational plans as planning scenarios.
As a rule, a plan version contains one organizational structure, that is, one root organizational unit. It is, however, possible to create more than one root organizational unit, that is more than one organizational structure in a plan version.
For more information on creating plan versions, see the Implementation Guide (IMG), under Personnel Management -> Global Settings in Personnel Management -> Plan Version Maintenance.
2. What are the basic object types?
Ans. An organization object type has an attribute that refers to an object of the organization management (position, job, user, and so on). The organization object type is linked to a business object type.
Example
The business object type BUS1001 (material) has the organization object type T024L (laboratory) as the attribute that on the other hand has an object of the organization management as the attribute. Thus, a specific material is linked with particular employees using an assigned laboratory.
3. What is the difference between a job and a position?
Ans. A job is not concrete; it is a general classification covering the various tasks to be performed, i.e. it is generic (e.g. Manager, General Manager, Executive).
Positions are concrete and specific, are related to persons, and are occupied by persons (e.g. Manager - HR, GM - HR, Executive - HR).
4. What is the difference between an organizational unit and a work centre?
Ans. Work Centre : A work center is an organizational unit that represents a suitably-equipped zone where assigned operations can be performed. A zone is a physical location in a site dedicated to a specific function.
Organization Unit : Organizational object (object key O) used to form the basis of an organizational plan. Organizational units are functional units in an enterprise. According to how tasks are divided up within an enterprise, these can be departments, groups or project teams, for example.
Organizational units differ from other units in an enterprise such as personnel areas, company codes, business areas etc. These are used to depict structures (administration or accounting) in the corresponding components.
5. Where can you maintain relationships between objects?
Ans. Infotype 1001 that defines the Relationships between different objects.
There are many types of possible relationships between different objects. Each individual relationship is actually a subtype or category of the Relationships infotype.
Certain relationships can only be assigned to certain objects. That means that when you create relationship infotype records, you must select a relationship that is suitable for the two objects involved. For example, a relationship between two organizational units might not make any sense for a work center and a job.
6. What are the main areas of the Organization and Staffing user interfaces?
Ans. You use the user interface in the Organization and Staffing or Organization and Staffing (Workflow) view to create, display and edit organizational plans.
The user interface is divided into various areas, each of it which fulfills specific functions.
Search Area
Selection Area
Overview Area
Details Area
Together, the search area and the selection area make up the Object Manager.
7. What is Expert Mode used for?
Ans. Expert Mode is the interface used to create the org structure object by object: using infotypes we create objects in Expert Mode, and we have to use different transactions to create the various types of objects. If the company needs to create a huge structure, we use Simple Maintenance instead, because it is user friendly: it is easy to create a structure, and the system creates the relationships between the objects automatically.
8. Can you create cost centers in Expert Mode?
Ans. Probably not. You create cost center assignments to assign a cost center to an organizational unit, or position.
When you create a cost center assignment, the system creates a relationship record between the organizational unit or position and the cost center. (This is relationship A/B 011.) No assignment percentage record can be entered.
9. Can you assign people to jobs in Expert Mode?
10. Can you use the organizational structure to create a matrix organization?
Ans. By depicting your organizational units and the hierarchical or matrix relationships between them, you model the organizational structure of your enterprise.
This organizational structure is the basis for the creation of an organizational plan, as every position in your enterprise is assigned to an organizational unit. This defines the reporting structure.
11. In general structure maintenance, is it possible to represent the legal entity of organizational units?
12. What is the Object Infotype (1000) used for?
Ans. Infotype that determines the existence of an organizational object.
As soon as you have created an object using this infotype, you can determine additional object characteristics and relationships to other objects using other infotypes.
To create new objects you must:
Define a validity period for the object
Provide an abbreviation to represent the object
Provide a brief description of the object
The validity period you apply to the object automatically limits the validity of any infotype records you append to the object. The validity periods for appended infotype records cannot exceed that of the Object infotype.
The abbreviation assigned to an object in the system renders it easily identifiable. It is helpful to use easily recognizable abbreviations.
You can change abbreviations and descriptions at a later time by editing object infotype records. However, you cannot change an object's validity period in this manner. This must be done using the Delimit function.
You can also delete the objects you create. However, if you delete an object, the system erases all record of the object from the database. You should only delete objects if they are not valid at all (for example, if you created an object accidentally).
13. What is the Relationships Infotype (1001) used for?
Ans. Infotype that defines the Relationships between different objects.
You indicate that an employee or user holds a position by creating a relationship infotype record between the position and the employee or user. Relationships between various organizational units form the organizational structure in your enterprise. You identify the tasks that the holder of a position must perform by creating relationship infotype records between individual tasks and a position.
Creating and editing relationship infotype records is an essential part of setting up information in the Organizational Management component. Without relationships, all you have are isolated pieces of information.
You must decide the types of relationship record you require for your organizational structure.
If you work in Infotype Maintenance, you must create relationship records manually. However, if you work in Simple Maintenance and Structural Graphics, the system creates certain relationships automatically.
14. Which status can Infotypes in the Organizational Management component have?
Ans. Infotypes in Organizational Management can have the status planned, active, submitted, approved or rejected. Once you have created the basic framework of your organizational plan in Simple Maintenance, you can create and maintain all infotypes allowed for the individual objects in your organizational plan. These can be the basic object types of Organizational Management: organizational unit, position, work center and task. You can also maintain object types which do not belong to Organizational Management.
15. What is an evaluation path?
Ans. An evaluation path describes a chain of relationships that exists between individual organizational objects in the organizational plan.
Evaluation paths are used in connection with the definition of roles and views.
The evaluation path O-S-P describes the relationship chain Organizational unit > Position > Employee.
Evaluation paths are used to select other objects from one particular organizational object. The system evaluates the organizational plan along the evaluation path.
Starting from an organizational unit, evaluation path O-S-P is used to establish all persons who belong to this organizational unit or subordinate organizational units via their positions.
16. What is Managers Desktop used for?
Ans. Manager's Desktop assists in the performance of administrative and organizational management tasks. In addition to functions in Personnel Management, Manager's Desktop also covers other application components like Controlling, where it supports manual planning or the information system for cost centers.
17. Is it possible to set up new evaluation paths in Customizing?
Ans. You can use the evaluation paths available or define your own. Before creating new evaluation paths, check the evaluation paths available as standard.
18. Which situations require new evaluation paths?
Ans. When using an evaluation path in a view, you should consider the following:
Define the evaluation path in such a manner that the relationship chain always starts from a user (object type US in Organizational Management) and ends at an organizational unit, a position or a user.
When defining the evaluation path, use the Skip indicator in order not to overload the result of the evaluation.
19. How do you set up integration between Personnel Administration and Organizational Management?
Ans. Integration between the Organizational Management and Personnel Administration components enables you to:
Use data from one component in the other
Keep data in the two components consistent
Basically its relationship between person and position.
Objects in the integration plan version in the Organizational Management component must also be contained in the following Personnel Administration tables:
Tables Objects
T528B and T528T Positions
T513S and T513 Jobs
T527X Organizational units
If integration is active and you create or delete these objects in Organizational Management transactions, the system also creates or deletes the corresponding entries automatically in the tables mentioned above. Entries that were created automatically are indicated by a "P". You cannot change or delete them manually. Entries you create manually cannot have the "P" indicator (the entry cannot be maintained manually).
You can transfer either the long or the short texts of Organizational Management objects to the Personnel Administration tables. You do this in the Implementation Guide under Organizational Management -> Integration -> Integration with Personnel Administration -> Set Up Integration with Personnel Administration. If you change these control entries at a later date, you must also change the relevant table texts. To do that you use the report RHINTE10 (Prepare Integration (OM with PA)).
When you activate integration for the first time, you must ensure that the Personnel Administration and the Organizational Management databases are consistent. To do this, you use the reports:
RHINTE00 (Adopt organizational assignment (PA to PD))
RHINTE10 (Prepare Integration (PD to PA))
RHINTE20 (Check Program Integration PA - PD)
RHINTE30 (Create Batch Input Folder for Infotype 0001)
The following table entries are also required:
PLOGI PRELI in Customizing for Organizational Management (under Set Up Integration with Personnel Administration). This entry defines the standard position number.
INTE in table T77FC
INTE_PS, INTE_OSP, INTEBACK, INTECHEK and INTEGRAT in Customizing under Global Settings -> Maintain Evaluation Paths.
These table entries are included in the SAP standard system. You must not change them.
Since integration enables you to create relationships between persons and positions (A/B 008), you may be required to include appropriate entries to control the validation of these relationships. You make the necessary settings for this check in Customizing under Global Settings -> Maintain Relationships.
Sincerely,
Devang Nandha
"Together, Transform Business Process by leveraging Information Technology to Grow and Excel in Business". -
Fico interview questions and Real time tickets with resolving details
MODERATOR: Do not post (or request) email address or links to copyrighted or confidential information on these forums. If you do, the thread will be LOCKED and all points UNASSIGNED.
hi sap gurus
I have done SAP FICO and I am in job trials. Can anybody help me with FICO interview questions and real-time tickets with resolving details?
regards
prasad.v
Edited by: chinna prasad on Jun 5, 2008 4:10 PM
Hello Prasad,
Before attending interviews, you first need to understand general things like CV writing, projects, sub-modules etc.; gain knowledge of these concepts and then you can move further.
1. CV 2. Projects 3. Your strengths in SAP (reading, reading, reading... practice, practice, practice)
Also, interact with your friends who are on job trials; from them you can get more information about the interview process, methodology, technical topics etc.
I am sending some real-time interview technical questions which will be useful for you.
Questions:
1. When tickets are raised by end users, who assigns the priority? After the tickets are resolved, who closes their status?
2. In real time, how many normal periods, special periods and MM periods can we have open at a time?
3. What is client dependent & client independent?
4. How do we transport configuration settings from one client to another client or to the production client? Which tools can we use for the transport?
5. Why don't we assign business areas to company codes?
6. What is the difference between a General GL A/c, Control A/c, Reconciliation A/c & Offsetting A/c?
Answers:
1. The priority is generally decided by the coordinator on the client side. After tickets are resolved, they have to be closed by the coordinator on the customer site.
2. In FI, as many as you want. In MM, only 2 (current month + previous month).
3. Certain tables and customizations made in one client affect the other clients as well; these are cross-client, i.e. client independent. If the changes made in one client have no impact on the other clients, they are said to be client dependent.
4. Transports from one server to the other can be made with the help of transport requests. When a configuration is done, the system generates a request number. First release the task and then release the request. Use transaction SE10 / SE09 (SE09: workbench transports; SE10: customizing transports, though currently no such difference actually exists).
5. Because in the case of multiple company codes, the same business area can be used across company codes. A business area is cross-company-code, meaning it is not confined to one company code; that's why we don't assign a BA to any particular company code. It is client dependent, not company code dependent. We can post values from any company code to any of the BAs in that client.
6. General GL Accounts are those used for standard postings, for example Income and Expense accounts.
- Control Accounts are basically used for reconciliation between modules like FI and CO, to ensure that both modules are in sync.
- Reconciliation Accounts are those specifically covering ADK (A-Assets, D-Customers, K-Vendors). For example, a customer master would be mapped to a Bills Receivable reconciliation account, and any transaction that needs to be posted is done against the customer code.
- Offsetting Accounts are used for a variety of reasons; a few examples are intercompany postings and, at the time of implementation, the TB and balance sheet uploads that are offset against a dummy account.
All the best. Don't forget to assign points if useful, and if you have any queries please revert back.
thanks
Anil -
Dataguard Interview questions and most frequently asked DG issues
Hi Gurus,
I am new to this forum and happy to be a part of it.
Can someone help me by posting "Data Guard interview questions and most frequently asked DG issues", as I am preparing for interviews.
And also share the enhancements to DG in 11g.
Thanks in advance.
I'm not impressed by any of the questions at any of the linked sites provided, so here are the ones I would expect someone to be able to answer:
1. What is the difference between Physical and Logical Data Guard in terms of how they work and how they are used in the enterprise?
2. What is the difference between vanilla Data Guard, Active Data Guard, and a snapshot standby?
3. How do you enable the Data Guard Broker process and why and when would you want to?
4. Who is Larry Carpenter and why should you care?
5. What parameters in using orapwd are critical for success?
6. What would you recommend as a value for SEND_BUF_SIZE and why?
7. Given a primary production database, and a full RMAN backup that takes 1 hour to fully restore ... how long would it take you to implement Physical Data Guard from the time you started working?
If the answer to question 7 is greater than 2 hours ... my assumption would be that you have probably never done it before, and it wouldn't matter that you got the first six questions correct. -
I have one year of experience in an implementation project. I am pasting my responsibilities below. Can you tell me the possible interview questions based on these responsibilities?
Was part of the Organization Structure Workshop and General Requirements Workshop to understand the as-is / to-be study
Was involved in the Business Blueprint document preparation
Conducted Gap analysis and mapped business processes with SAP R3
Was responsible for maintaining global settings - Define and assign the enterprise structure, Define Company, areas of business etc
Successfully configured settings for Company code, Fiscal year variant, Chart of accounts, posting period variant and field status variant
Defined Account Groups, Number Ranges, Maintain Field Status Groups, and Payment terms
Created Bank Master data, House banks, G/L accounts for each bank account and create reconciliation accounts for vendors, customers and assets
Worked on A/P & A/R and configure customer and vendor master data, local and global data, number ranges, payment types, manual and automatic payment methods, partial payments, residual payments, discount payments
Configured Dunning Procedures taking credit worthiness of customers into account and made applicable procedure to customers
In CO Configured settings for Controlling Area
Created Primary and Secondary Cost Elements, created Cost Element Groups, in Cost Element Accounting
Conducted user training after the implementation and supported the client in post-launch sessions.
-
Efficient data structure to implement simple text editor?
I was given this problem in an interview:
What data structure would you use to implement a simple text editor that does these 4 functions:
a) goto(line number)
b) insert(char input,location)
c) delete(location)
d) printAll() //print entire file
Given that I'm such a newb, I was stumped. I came up with making a 2D array that would allow O(1) time for goto, but O(n) for everything else (shifting everything in the array). There were other downfalls too, dealing with space issues and such, but he wanted me to optimize this data structure. I then came up with a linked list of arrays, but that had similar problems.
But thinking about it further is driving me a little crazy so I'm wondering if you guys have any suggestions on how to answer this question...
One thing that came to mind afterwards is to implement the data structure as a binary tree, where each node contains:
class Node {
    char theChar;
    int position; // ie 0 = first character in the file, and 81 could be the first character in the 2nd line
    Node left, right, parent;
}
So how it works is: the cursor would know where the location was, so I would know where to delete and insert within the tree.
insert {
    // search for location to insert after (log n)
    // create new character node and append to current node
    // increment position for all subsequent children
}
delete(x) {
    // search for x position
    // if found, remove node and decrement position value for all children
}
goto(line #) {
    // return line # * 80 (or whatever max length for a single line)
}
The major problem I see here is balancing the tree after every insert/delete.
Thanks in advance.One of many great things emacs has given us is the gap buffer:
http://en.wikipedia.org/wiki/Gap_buffer
To see a Java implementation of this you can look in the Java SDK source: the document model used in Swing (I forget exactly what it's called) uses a gap buffer.
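To make the idea concrete, here is a minimal gap buffer sketch (my own illustration, not the Swing code; it assumes plain chars and leaves out goto(), which could be served by scanning for newlines or by keeping a separate line-start index):

// The text lives in one array with a movable "gap" at the cursor, so insert
// and delete at the cursor are amortized O(1); moving the cursor costs time
// proportional to the distance moved.
public class GapBuffer {
    private char[] buf = new char[16];
    private int gapStart = 0;          // first free slot in the gap
    private int gapEnd = buf.length;   // one past the last free slot

    public int length() { return buf.length - (gapEnd - gapStart); }

    // Slide the gap so it starts at logical position pos (0 <= pos <= length()).
    public void moveCursor(int pos) {
        while (gapStart > pos) buf[--gapEnd] = buf[--gapStart]; // move text right of cursor
        while (gapStart < pos) buf[gapStart++] = buf[gapEnd++]; // move text left of cursor
    }

    public void insert(char c, int pos) {
        moveCursor(pos);
        if (gapStart == gapEnd) grow(); // gap is full: double the backing array
        buf[gapStart++] = c;
    }

    public void delete(int pos) { // remove the character at logical index pos
        moveCursor(pos);
        if (gapEnd < buf.length) gapEnd++; // swallow the char into the gap
    }

    public String printAll() {
        return new String(buf, 0, gapStart)
             + new String(buf, gapEnd, buf.length - gapEnd);
    }

    private void grow() {
        char[] bigger = new char[buf.length * 2];
        System.arraycopy(buf, 0, bigger, 0, gapStart);          // keep the front half
        int tail = buf.length - gapEnd;
        System.arraycopy(buf, gapEnd, bigger, bigger.length - tail, tail); // and the tail
        gapEnd = bigger.length - tail;
        buf = bigger;
    }

    public static void main(String[] args) {
        GapBuffer gb = new GapBuffer();
        String s = "helo";
        for (int i = 0; i < s.length(); i++) gb.insert(s.charAt(i), i);
        gb.insert('l', 3); // "helo" -> "hello"
        System.out.println(gb.printAll());
    }
}

Note that the balancing worry from the tree idea above disappears here: nothing is ever rebalanced, and the worst case is just a long cursor move.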