Historical data in SIS Reports
Hi,
We implemented SIS in our company last week and have successfully been able to fetch data from the date we went live. However, per the user requirement, the users would also like to see the data for the past couple of years. Can somebody please let me know whether there is any way we can get the historical data into my MCSI report?
Looking forward to your reply.
Thanks & Regards,
Gaurav
Dear Gaurav,
Yes, it is very much possible to bring in your historical data. All you need to do is:
Go to transaction OLI7 to update orders, OLI8 for deliveries and OLI9 for billing documents.
The system rebuilds all the data in a separate version, "&(".
After checking the correctness of the data in this version, you can copy it to the main data version (version 000) using transaction OLIX.
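The flow described here (rebuild into a side version, verify, then copy over the live version) can be pictured with a small sketch. This is plain Python, not SAP code; the document structure and version names merely mirror the convention above:

```python
# Toy model of the LIS statistical setup: rebuild statistics into a side
# version (prefix "&(") from the original documents, verify, then copy
# over the live version 000 -- mirroring OLI7/OLI8/OLI9 followed by OLIX.

def rebuild(documents):
    """Re-derive statistics from the original documents."""
    totals = {}
    for doc in documents:
        totals[doc["material"]] = totals.get(doc["material"], 0) + doc["value"]
    return totals

versions = {"000": {}}  # live data: empty, since only post-go-live docs updated it
docs = [{"material": "M1", "value": 100},
        {"material": "M2", "value": 50},
        {"material": "M1", "value": 25}]

versions["&(T"] = rebuild(docs)                    # setup run into side version
assert versions["&(T"] == {"M1": 125, "M2": 50}    # check correctness first
versions["000"] = dict(versions["&(T"])            # then "OLIX-copy" into 000
del versions["&(T"]                                # and drop the side version
```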
You can also read more on this under the following Customizing path:
SPRO -> Logistics - General -> Logistics Information System (LIS) -> Logistics Data Warehouse -> Data Basis -> Tools -> Setup of Statistical Data
Read the help provided for these applications.
Regards,
Suresh S
Award points if it helps.
Similar Messages
-
0HR_PT_2 How to get back historical data for new report time type
Hi All Expert,
We have implemented and been using the 0HR_PT_2 extractor for the past year, and the delta is working. Recently there was a requirement to read more data from the ZL cluster table, and new BW report time types were added to extract that data.
The delta doesn't pick up the past year's data for the new report time types we added.
Do we need to re-initialize the load to get back the historical data every time we add a new report time type?
Please advise. Thanks,
Ken
Edited by: Ken Hong on Feb 27, 2008 9:24 PM
-
Precision Queue Interval All Fields report can only produce 14 days historical data
Hi All,
I have PCCE 9.0.4. It has only the internal database; there is no external database for HDS.
I configured a Precision Queue for incoming call agents. When I run the "Precision Queue Interval All Fields (PQIAF)" stock report provided by Cisco, I can only find 14 days of historical data. According to the PCCE product specification doc, the internal logger should provide 400 days of historical data for the PQIAF report.
When I open the report definition, PQIAF points to 3 tables: Router_Queue_Interval, Skill_Group_Interval and Precision_Queue. I found that Router_Queue_Interval only stores 14 days of data. Is this normal?
Has anyone seen the same problem when PCCE uses only the internal database?
Please advise,
Heribertus
The logger database for Enterprise only holds 14 days by default. As part of the PCCE install you should have edited the retention period to 400 days. It looks like you missed changing the Router_Queue_Interval table to 400 days.
You need to edit the logger setup. See this link for setting the retention period
http://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cust_contact/contact_center/pcce/pcce_901/installation/guide/PCCE_BK_IBC40C6F_00_installing-and-configuring-pcce/PCCE_BK_ICC40C6F_00_installing-and-configuring-pcce_chapter_0111.html#PCCE_TK_L33657A9_00
Graham -
SIS Report- Old Data update Issue
Hi All,
The relevant configuration has been done to activate the SIS report. Using transactions OLI7, OLI8 and OLI9, I have updated the data for info structure S001 and copied it with OLIX, but the report still shows only the new document values in MC+E.
I followed the link below, but even so the old document data is not showing in the report.
http://scn.sap.com/docs/DOC-29547
Can anyone please help me get the old data into the MC+E and MCTA reports?
Regards,
Mythily -
Report Painter: PCA reports & historical data issue
I need to create a Report Painter PCA report for the P&L that will do the following:
--> Consolidated P&L for 2 company codes
--> Company code 1 - YTD figures to include the full year
--> Company code 2 - YTD figures to include only the 2nd half of the year, as company 2 was acquired in the middle of the year
The problem is that we are migrating the historical data for the last two years, and I do not want the report to pick this up.
Suggestions? Anyone?
Thanks -
Hi,
I have an Opportunity cube which stores all the historical data. How can I create a report on this historical data? For example, I want a report that shows how many days an opportunity spent in each status. The cube stores the complete history of the opportunity, such as when it moved from phase 1 to phase 2 and when each change happened. I don't have any idea how to create such reports. Need guidance.
Thanks
Naveen
I guess you can create an RKF (restricted key figure) for each status to get the date of change and calculate the number of days between two dates. However, this is only practical if you have a limited/reasonable number of statuses for an opportunity.
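As a rough illustration of the days-per-status calculation suggested above (plain Python over a hypothetical change history, not RKF/BEx syntax):

```python
from datetime import date

# Hypothetical opportunity status history: (status, date the status began).
history = [
    ("Phase 1", date(2024, 1, 1)),
    ("Phase 2", date(2024, 1, 10)),
    ("Closed",  date(2024, 2, 1)),
]

def days_per_status(history, today):
    """Days spent in each status = next change date minus this change date;
    the current status runs up to 'today'."""
    spans = {}
    for (status, start), (_, end) in zip(history, history[1:] + [(None, today)]):
        spans[status] = (end - start).days
    return spans

print(days_per_status(history, date(2024, 2, 5)))
# {'Phase 1': 9, 'Phase 2': 22, 'Closed': 4}
```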
Thanks
Viswa -
Hello All,
We have all our vendor data in MS-SQL database.
It contains historical data (vendors with whom we don't do business anymore) and also the active vendors with whom we are in business.
We are moving the active vendor data into our ERP system using LSMW.
We are also planning to move the inactive vendor data into some new tables in ERP to store them.
My question is: can we do reporting on this inactive vendor data from these tables?
Or is reporting possible only on the active transactional data?
Please share some ideas, as I am a beginner and I need to know this urgently.
Thanks in advance,
Veena.
Hi Sam,
I have the same query.
With BW we can do this, but in our case the BW implementation is in Phase 2, so we want to achieve this historical data reporting in Phase 1 in ERP. I want to know if we can report on these Z-tables where we store the historical data.
It is a bit urgent; can you please let me know if it is possible?
Thanks in advance,
VS. -
LineChart with current and historical data daily report
Hi all,
I'm new to BAM and I want to know whether it is possible to define a LineChart showing information about today and the last 30 days.
Is it possible using two data objects, one for today's data and an external one provided by ODI with the contents of the last 30 days?
Can anyone provide a sample?
Thanks in advance.
Hi,
Both current and historical data are always stored within the same table. There is no option to create separate tables for each that are managed by OWM.
However, you do have the option to use dbms_wm.PurgeTable, which can be used to purge a set of data and optionally store it in an archive table. But after that is done, the data is no longer under Workspace Manager's control or the versioning environment. From the description you gave, I am unsure whether this would be sufficient to satisfy your requirements. I am guessing not, but wanted to make you aware of the functionality.
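The purge-and-optionally-archive behaviour described above can be sketched as follows. This is plain Python, not dbms_wm; the date-based cutoff rule is only an assumption for illustration:

```python
from datetime import date

def purge(rows, cutoff, archive=None):
    """Remove rows older than cutoff; optionally stash them in an archive list.

    Archived rows live outside the managed table and are no longer
    versioned -- the trade-off noted above.
    """
    kept = []
    for row in rows:
        if row["day"] < cutoff:
            if archive is not None:
                archive.append(row)
        else:
            kept.append(row)
    return kept

rows = [{"day": date(2024, 1, 1), "v": 10}, {"day": date(2024, 2, 1), "v": 20}]
archive = []
live = purge(rows, date(2024, 1, 15), archive)
assert [r["v"] for r in live] == [20]      # current data stays
assert [r["v"] for r in archive] == [10]   # purged data lands in the archive
```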
Regards,
Ben -
Dear Guru's,
I have created my own SIS info structure and now I want to update the old data into it. Currently the SIS shows values only from the date when it was created. Can anybody tell me the steps to update it with the old values?
Rgds,
Ajinkya
Hi,
Kindly refer to Glynn Williams, page 418: it says that should one create a new structure and require historical data to be updated into it, refer to OSS note number 0064636. This explains the methods of proceeding with the statistical update.
Summary
Symptom
This note describes the standard procedure for setting up the statistical data in the Logistics Information System (LIS). However, in addition to this standard procedure, you should also read the application-specific documentation which is stored for each application (Purchasing, Sales and Distribution, Production, Inventory Controlling) in Customizing. This documentation can be found as follows (starting from the top level of the Customizing tree):
+ Logistics - General + Logistics Information System (LIS) + Logistics Data Warehouse + Data Basis + Tools + Setup of Statistical Data + Application-specific Setup of Statistical Data
The programs which carry out the setup in the individual applications are also called at this point in Customizing.
Procedure for statistical data setup
The setup reads all original documents or only the original documents required by the user (such as purchase orders, production orders, material documents, billing documents and so on), and sets up statistical data from these documents. Here, every document is updated in the LIS information structures corresponding to the update rules defined in Customizing.
Does the statistical data setup work correctly?
Since a setup can take a number of hours, it is best to just set up one or two (incorrect) documents at first, to check the result before a complete setup is carried out. This is carried out with the following steps:
1. Call the setup program for a few test documents
Call the statistical data setup for the respective application via Customizing. (Path -> see above) Enter the information structure to be set up, and, for example, '&(T' as a version. Important: the version '&(T' must not yet exist, that is, '&(T' must be empty.
Explanation of the version: All data records of an information structure begin with a three-character version number. The current statistical data (actual data) has the version number 000. Data is selected from this version when a standard analysis is carried out. So that the actual data is not modified, the setup must be carried out into another version. The versions into which a setup is carried out must begin with '&(' - by convention.
Enter the document numbers for which the setup is to be carried out as a test on the setup selection screen. Start the setup after this. (If there are not many documents, this can take place online).
2. Check the result of the setup
To display the set up documents in the standard analysis, the standard analysis must access the version '&(T'. Via the menu path "System -> User profile -> User parameters", enter and save the parameter ID 'MCR' with the value 'X'. As a result, an input field appears on the standard analysis selection screen, in which a version other than '000' can be entered.
Now call the standard analysis in question. On the selection screen (at the very bottom), overwrite the version '000' with '&(T' (if you have followed the above suggestions). The standard analysis then displays the data which was set up.
3. The user parameter 'MCR' can be reset again.
Standard procedure for the statistical data setup in the LIS
The data of the actual version (000) should be replaced with the set up data when setting up a complete information structure. Two procedures (A and B) are described below. These procedures have the following advantages and disadvantages:
Advantages of procedure A:
It is possible to compare the old actual data with the set up data.
Disadvantages of procedure A:
Data is stored 1 or 2 times (3 times if the actual data is saved)
R/3 operation must be shut down during the entire procedure
Longer runtime than procedure B
Advantages of procedure B:
Data is only stored 1 or 2 times (3 times if the actual data is saved)
R/3 operation must only be shut down during the actual setup.
The runtime is shorter than for procedure A
Disadvantages of procedure B:
It is not possible to compare the old actual data with the set up data.
Procedure A is the "normal" method and is suitable for normal datasets. On the other hand, procedure B should be used for large datasets.
Although the correctness of the setup should already have been checked using a few documents - as described above, it can nevertheless be a good idea to compare the entire data with the old data (or at least taking random samples) after carrying out the setup. Only procedure A offers this possibility.
Important additional information on deleting and converting data
Note that when report RMCVISCP is mentioned in the following section, it should only be used in releases < 4.X. In all release levels as of 4.X, report RMCVISCP has been replaced by report RMCSISCP. So use the latter report, since report RMCVISCP in releases > 4.X is also no longer supported by Support.
Procedure A
R/3 operation must be shut down until the following steps 2 to 5 have all been carried out. In particular, no documents may be created or modified.
1. Delete the target version of the setup
If the setup is to take place in version '&(N', for example, this version must be deleted using the program RMCVISCP. Enter the affected information structures and the version to be deleted on the selection screen of RMCVISCP. (In addition, the checkbox "Delete a version?" must be marked).
The version '000' may not be chosen as a target version of the setup. This means that '000' may not be entered as the version to be deleted.
2. Carry out the setup.
In Customizing (see the path described under "Symptom"), call the setup corresponding to the required application. Since all, or at least a large proportion, of the original documents are now to be set up, it is recommended that you schedule the setup in the background for a weekend.
3. Check the data which was set up
The set up data can be displayed via the standard analysis. To do this, set the MCR parameter to 'X' (see above), so that the version '&(N' can be entered on the standard analysis selection screen (if the setup was carried out in this version).
Furthermore, you can compare version '000' (old actual data) with the newly set up data (version '&(N') via the comparison of planned and actual data in the standard analysis. (The menu path for the comparison of planned and actual data in the standard analysis is as follows: Edit -> Comparisons -> Planned/actual... )
4. Delete version '000'
Delete version '000' using program RMCVISCP. If there is sufficient disk space, the version '000' can of course be saved as version '&(S', for example, before carrying out the deletion. (This version must be empty or have been deleted beforehand). However, this saving is not absolutely necessary, especially since the old actual data is presumably incorrect (otherwise there would be no need to make the setup).
5. Copy the set up data into version '000'
Using program RMCVISCP, copy the set up version '&(N' into the target version '000'. R/3 operation can start again after this.
6. Delete version '&(N'
Finally, delete the version '&(N' using the program RMCVISCP.
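Taken together, the six steps of procedure A amount to a delete/rebuild/swap of versions. A plain-Python sketch of the sequence (not SAP code; `rebuild_from_documents` is a made-up stand-in for the setup program):

```python
def rebuild_from_documents(docs):
    """Stand-in for the LIS setup run (the application-specific setup program)."""
    return {"records": sorted(docs)}

def procedure_a(versions, docs, work_version="&(N"):
    versions.pop(work_version, None)                       # 1. delete the target version
    versions[work_version] = rebuild_from_documents(docs)  # 2. carry out the setup
    # 3. check the rebuilt data (compare against version '000' here)
    versions.pop("000", None)                              # 4. delete version '000'
    versions["000"] = versions[work_version]               # 5. copy rebuilt data into '000'
    del versions[work_version]                             # 6. delete the work version
    return versions

result = procedure_a({"000": {"records": ["old"]}}, ["doc1", "doc2"])
assert result == {"000": {"records": ["doc1", "doc2"]}}
```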
Procedure B:
Within procedure B, R/3 operation need only be shut down during the following steps 4 and 5.
1. Save the actual data
Copy version '000' into another version, for example, '&(S'. Call program RMCVISCP for this, and enter '000' as the source version and, for example, '&(S' as the target version. The target version must be empty or have been deleted beforehand (with the program RMCVISCP).
However, the actual data need not necessarily be saved. In particular, this is the case if the actual data contains a large number of errors. However, the saved data can be an advantage if the setup is canceled for any reason. The old actual data could then be copied back into version '000' again.
2. Deactivate updating
Deactivate the update in Customizing for the information structures to be set up. The Customizing path for this is as follows:
+ Logistics - General + Logistics Information System (LIS) + Logistics Data Warehouse + Updating + Updating Control + Activate update
Then choose the relevant application. The affected information structures can be selected by double-clicking on the subsequent screen. Then mark the radio button "No updating" under "Updating" on the following popup.
3. Delete version '000' (with program RMCVISCP)
For large datasets, deleting version '000' can take several hours. However, since updating of the information structure is deactivated, R/3 operation can continue as normal in the meantime.
4. Activate updating
After deleting version '000', reactivate updating of the affected information structures (as under point 2 above).
5. Carry out the setup
In Customizing, call the setup corresponding to the required application. The setup must be carried out into version '000'.
R/3 operation must be shut down during the setup. Before and after this step, R/3 operation can continue as normal.
6. Check the set up version '000' by taking random samples
A comparison with the old actual data is only possible if the old actual data was saved beforehand (for example, into the version '&(S'). However, the comparison is only useful if R/3 operation was stopped between saving the old actual data and the setup. Otherwise, there exists the danger that documents were created or changed in the meantime. These document changes would then only be taken into account in the set up data, not in the old actual data.
7. Delete version '&(S' with program RMCVISCP
The saved old actual data can then be deleted afterwards.
Reward points if useful.
Regards,
Amrish Purohit
Message was edited by:
AMRISH PUROHIT -
Hi Gurus,
I urgently need details of the SIS reports. Can someone send me the SIS reports details (document)? My email is [email protected]
Thanks in advance.
Venkat
Hi,
Statistics Group:
Purpose: to capture data for the standard reports, we need to activate statistics groups as under:
--> Item category (Configuration)
--> Sales document type (Configuration)
--> Customer (Maintain in Master data)
--> Material (Maintain in Master data)
When you generate statistics in the logistics information system, the system uses the combination of specified statistics groups to determine the appropriate update sequence. The update sequence in turn determines for exactly which fields the statistics are generated.
Configuration:
IMG --> Logistics Information System (LIS) --> Logistics Data Warehouse --> Updating --> Updating Control --> Settings: Sales --> Statistics Groups -->
1. Maintain Statistics Groups for Customers
2. Maintain Statistics Groups for Material
3. Maintain Statistics Groups for Sales Documents
4. Assign Statistics Groups for Each Sales Document Type
5. Assign Statistics Groups for each Sales Document Item Type .....
All Standard Reports which are available are as under:
SAP Easy Access: Information Systems -> Logistics -> Sales and distribution ->
1. Customer -> Incoming orders / Returns / Sales / Credit memos / Sales activities / Customer master / Conditions / Credit Master Sheet
2. Material -> Incoming orders / Returns / Sales / Credit memos / Material master / ...
3. Sales organization -> Sales organization / Sales office / Sales employee
4. Shipping point -> Deliveries / Returns
5. SD documents -> Orders / Deliveries / Billing documents ...
& so on.
Some of the Standard reports in SD are:
Sales summary - VC/2
Display Customer Hierarchy - VDH2
Display Condition record report - V/I6
Pricing Report - V/LD
Create Net Price List - V_NL
List customer material info - VD59
List of sales order - VA05
List of Billing documents - VF05
Inquiries list - VA15
Quotation List - VA25
Incomplete Sales orders - V.02
Backorders - V.15
Outbound Delivery Monitor - VL06o
Incomplete delivery - V_UC
Customer Returns-Analysis - MC+A
Customer Analysis- Sales - MC+E
Customer Analysis- Cr. Memo - MC+I
Deliveries-Due list - VL04
Billing due list - VF04
Incomplete Billing documents - MCV9
Customer Analysis-Basic List - MCTA
Material Analysis(SIS) - MCTC
Sales org analysis - MCTE
Sales org analysis-Invoiced sales - MC+2
Material Analysis-Incoming orders - MC(E
General- List of Outbound deliveries - VL06f
Material Returns-Analysis - MC+M
Material Analysis- Invoiced Sales - MC+Q
Variant configuration Analysis - MC(B
Sales org analysis-Incoming orders - MC(I
Sales org analysis-Returns - MC+Y
Sales office Analysis- Invoiced Sales - MC-E
Sales office Analysis- Returns - MC-A
Shipping point Analysis - MC(U
Shipping point Analysis-Returns - MC-O
Blocked orders - V.14
Order Within time period - SD01
Duplicate Sales orders in period - SDD1
Display Delivery Changes - VL22
Reward points if useful.
Regards,
Amrish Purohit -
Remote historical data is not retrieved completely viewing it in MAX4
Hi,
Since I installed LabVIEW 8, I have had some problems retrieving historical data from another computer. Sometimes not all data is retrieved (if I zoom in or out, or move back in time), and this missing data is never retrieved.
I already deleted the Citadel cache once, but after this even less data was retrieved... What's really weird is that for channels which weren't retrieved correctly, the data no longer gets updated at all!
On the remote computer I have a LabVIEW DSC Runtime 7.1 running (MAX 3.1.1.3003) on my local computer MAX 4.0.0.3010 and LabVIEW 8 DSC (development system) is installed parallel to LV DSC 7.1.1 (dev system). LV 8 is installed for testing purposes (I doubt we'll switch soon) and overall I like MAX 4. The HyperTrend.dll on my local computer is version 3.2.1017.
This is really a quite annoying bug!
So long,
Carsten
Message Edited by cs42 on 02-02-2006 09:18 AM
Hi,
> We've been unable to reproduce this issue. If you could provide some additional information, it might help us out.
I was afraid of this, as even on my computer it happens only sometimes...
> 1) How many traces are you viewing?
The views I observed this in had 2 to 13 traces.
> 2) How often are the traces being updated?
For some it's pretty often (about once a second); for some it's very infrequent (no change in data, meaning they are updated only because of the maximum time between logs). I more often see this for traces that are updated very infrequently, but I think I've seen it for frequent traces as well (for those it currently works).
> 3) Are the traces being updated by a tag value change, or by the "maximum time between logs" setting in the engine?
It happened for both types.
> 4) What is the frequency of the "maximum time between logs" setting?
Max time between logs is 10 minutes.
> 5) Is the Hypertrend running in live mode when you zoom out/pan?
I think it happened in both modes, but it definitely did in live mode.
> 6) If you disable/re-enable live mode in the Hypertrend, does the data re-appear?
I couldn't trigger the loading of the data. All I did was wait and work with MAX (zooming, panning, looking at data), and after quite a while (some hours) the data appeared.
I just tested this on a view where data is missing (for some days now!), and it didn't trigger data reloading. Zooming and panning don't either. There's a gap of up to 3 days now for some traces. 7 of the 13 traces of this view are shown incompletely, all stopping at the same time but reappearing at different times.
AFAIR from the laboratory computer (these are temperatures, and it's very plausible that they didn't change), there wasn't any change in these traces, so they all got logged because of the max time...
I just created a new view and added these traces: the gap is there as well.
(Sorry to put this all in this entry even though it relates to your other questions, but I started this live test with disabling/re-enabling live mode.)
> 7) Are the clocks on the client and server computers synchronized? If not synchronized, how far apart are the times on the two computers?
They should be (Windows 2000 Domain synchronized to ADS), but are 5 seconds apart.
One thing I remember now: I have installed DIAdem 10 beta 2 (10.0.0b2530, USI + DataFinder 1.3.0.2526). There I had (and reported) some problems with data loading from a Citadel database on a remote machine as well. This was attributed to a cache problem. Maybe a component is interfering?
Thanks for investigating.
Cheers,
Carsten -
How to see the historic data of CAT2
Dear All,
I have a question related to CATS.
From which table can I get the historical data of an employee as stored in transaction CAT2? I tried CATSDB but was not able to get the number of working hours as shown in CAT2.
I clicked on a field in CAT2 and checked the technical details. There I found the table name CATD, but when I run SE11 and enter this table name, it is displayed as a structure, not as a table.
Please provide your help in this regard.
Regards,
-Neha

The maximum number of columns allowed is 1023. So if there is more data than that, I am afraid there is no way to see it all on one screen.
The better approach is to use the Settings > Format List > Choose Fields option from the selection screen of SE16 to choose just the fields you want in the output.
It is highly unlikely that you are using all 100 fields, so you can very well hide a few of them with no impact on your output.
As someone else has suggested, using a report such as CATSXT_DA will definitely be a much more useful way of viewing all relevant fields from CATSDB.
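If a quick custom report is enough, a rough sketch for totalling the recorded hours directly from CATSDB could look as follows. The field names (PERNR, STATUS, CATSHOURS) follow the standard CATSDB table; the personnel number and the status value '30' for approved records are example assumptions to verify in your system.

```abap
* Sketch only: total the recorded hours for one employee from CATSDB.
* Verify field names and status values in SE11 / your CATS customizing.
DATA: lt_cats  TYPE TABLE OF catsdb,
      ls_cats  TYPE catsdb,
      lv_hours TYPE catshours.

SELECT * FROM catsdb
  INTO TABLE lt_cats
  WHERE pernr  = '00001234'     "example personnel number (assumption)
    AND status = '30'.          "approved records only (assumption)

LOOP AT lt_cats INTO ls_cats.
  lv_hours = lv_hours + ls_cats-catshours.
ENDLOOP.

WRITE: / 'Total recorded hours:', lv_hours.
```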
Reloading Historical Data into Cube: 0IC_C03...
Dear All,
I'm working on SAP BW 7.3, and I have five years of historical data loaded into InfoCube 0IC_C03 (Material Stocks/Movements (as of 3.0B)); the deltas are also scheduled (DataSources 2LIS_03_BF & 2LIS_03_UM) for the InfoCube. I have a new business requirement to reload the entire historical data into the InfoCube with the addition of "Fiscal Quarter", and I do not have a DSO in the data flow. I know it's a tricky InfoCube (Inventory Management) to work with, and reloading the entire historical data can be challenging.
1. How to approach this task, what steps should I take?
2. Drop/delete the entire data from the InfoCube, then delete the setup tables, refill them, and run a "Repair Full Load" — is this the way?
3. What key points should I keep in mind, as different BO reports are already running and using this InfoCube?
4. Should I run "Repair Full Load" requests for all DataSources (2LIS_03_BX, 2LIS_03_BF & 2LIS_03_UM)?
I will appreciate any input.
Many thanks!
Tariq Ashraf

Hi Tariq,
Unfortunately, you will need some downtime to execute a stock initialization in a live production system; otherwise you cannot guarantee data integrity. There are, however, ways to minimize it. Please see SAP Note 753654 (How can downtime be reduced for setup table update). You can dramatically reduce the downtime by distinguishing between closed and open periods; the closed periods can be updated retrospectively.
To make it more concrete, you could consider e.g. the periods prior to 2014 as "closed". You can then do the initialization starting from January 1, 2014. This will save you a lot of downtime. Closed periods can be uploaded retrospectively and do not require downtime.
Re. the marker update, please have a look at the document Re-initialization of the Material Stocks/Movements cube (0IC_C03) with 2LIS_03_BF, 2LIS_03_BX and 2LIS_03_UM in BW 7.x; steps 10, 11, 12 and 13 contain important information.
Re. the steps, they look OK to me, but please double-check against the same document.
Please have a look at the following document for more background information around inventory management scenarios:
How to Handle Inventory Management Scenarios in BW (NW2004)
Last but not least, you might want to have a look at the following SAP Notes:
SAP Note 436393 - Performance improvement for filling the setup tables;
SAP Note 602260 - Procedure for reconstructing data for BW.
Best regards,
Sander -
How to recreate EBS user and keep all his historical data.
Hi all
We have a user that is having an issue seeing any of his scheduled Discoverer reports within the Schedule Manager window of Discoverer Plus; Discoverer Desktop works fine.
The suggested solution is to recreate the EBS user. The problem with this is that, if we recreate the EBS user, he will lose all historical data connected to that user, including the results of the scheduled Discoverer reports as well as all of the EBS created/last-updated information.
Is there a way to recreate an EBS user and preserve the historical references?
Thanks

We have a user that is having an issue seeing any of his scheduled Discoverer reports within the Schedule Manager window of Discoverer Plus; Discoverer Desktop works fine.
The suggested solution is to recreate the EBS user. The problem with this is that, if we recreate the EBS user, he will lose all historical data connected to that user, including the results of the scheduled Discoverer reports as well as all of the EBS created/last-updated information.

Why do you need to recreate the user?
Are you saying you are going to create a new username for the same user and end-date the old one?
Is there a way to recreate an EBS user and preserve the historical references?

I believe there is no such way to find all records/tables with the old user_id. Even if you find the list and update them manually, I believe this approach is not supported.
Please log a SR to confirm the same with Oracle support.
Thanks,
Hussein -
Hi Experts, I am stuck with one query. I hope you will help me. I need to extract historical data for employees from PA0000 and PA0001.
Basically, I'm reading an input file (a flat file) using GUI_UPLOAD and storing it in internal table t_txt_upload.
Then, for all the personnel numbers in this internal table, I need to extract the historical data.
When I fetch the data using a SELECT query, it returns the historical records. But after the READ statement, only a single record is displayed.
Can you explain how to fetch all historical records for a personnel number?
Please guide me. For clarification I am attaching the code; could you please check it and tell me the necessary changes?
if not t_txt_upload[] is initial.           "check that data was uploaded
  sort t_txt_upload by pernr.               "sort uploaded internal table by pernr

* Get the data from table PA0000.
  select pernr
         begda
         endda
         massn
         from pa0000
         into table it_p0000
         for all entries in t_txt_upload
         where pernr = t_txt_upload-pernr.
  if sy-subrc ne 0.
    message e063.
  endif.
  sort it_p0000 by pernr.

* Read data from PA0001.
  select pernr
         begda
         endda
         persk
         stell
         plans
         werks
         btrtl
         persg
         from pa0001
         into table it_p0001
         for all entries in t_txt_upload
         where pernr = t_txt_upload-pernr.
  if sy-subrc ne 0.
    message e063.
  endif.
  sort it_p0001 by pernr.
endif.

loop at t_txt_upload.
  read table it_p0000 with key pernr = t_txt_upload-pernr binary search.
  read table it_p0001 with key pernr = t_txt_upload-pernr binary search.
  if sy-subrc eq 0.
    it_final-pernr = t_txt_upload-pernr.
*   Convert begin date from YYYYMMDD to DD/MM/YYYY.
    concatenate it_p0001-begda+6(2) it_p0001-begda+4(2)
                it_p0001-begda+0(4) into v_begin_date
                separated by '/'.
    it_final-begda = v_begin_date.
*   Convert end date from YYYYMMDD to DD/MM/YYYY.
    concatenate it_p0001-endda+6(2) it_p0001-endda+4(2)
                it_p0001-endda+0(4) into v_end_date
                separated by '/'.
    it_final-endda = v_end_date.
    append it_final.
    clear it_p0000.
    clear it_p0001.
  endif.
Endloop.

Hi user_jedi,
Yes, the data is persisted in the database and is permanent until deleted. The recommended practice is to use the Delete Data Alert Action to periodically purge historical data.
So an overall approach that you could use is:
* Incoming data is written both to BAM for the real time dashboards, and also to a system of record (database, datamart, data warehouse, ...).
* Use a Delete Data Alert Action to periodically purge historical data from BAM.
* If you want to access the historical data from BAM on occasion, you can use an External Data Object. Or use another reporting tool that's optimized for non-real time data.
Regards, Stephen
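Regarding the ABAP question above about only a single record being displayed: READ TABLE ... BINARY SEARCH returns just the first matching row, so to process every historical record per personnel number the usual pattern is a nested LOOP AT ... WHERE. A minimal sketch, using the same table and field names as the posted code (assumptions taken from that post):

```abap
* Sketch only: emit one output row per historical PA0001 record,
* instead of a single READ TABLE hit per personnel number.
loop at t_txt_upload.
  loop at it_p0001 where pernr = t_txt_upload-pernr.
    it_final-pernr = t_txt_upload-pernr.
    it_final-begda = it_p0001-begda.
    it_final-endda = it_p0001-endda.
    append it_final.
  endloop.
endloop.
```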