"No Time series found in Live Cache"
Hi All
We are having an issue when trying to open our Planning Books, with the message "No time series in Live Cache".
Can anyone suggest what to check?
siva
Hi all,
Thanks for all your inputs. Sorry I could not respond earlier; I am not sure what happened, as I was not there during the crash and was only called in when the issue came up. We contacted SAP, and I understand the issue arose after one of the Basis people ran report /SAPAPO/DM_LC_ANCHORS_DELMANDT (I am not sure if this is the cause). But since the data in the backup cubes was also not valid, we restored the system to a date when the data was thought to be OK and recovered from there.
thanks
Similar Messages
-
Drawback of Time series related to cache
Hi
can anyone tell me the drawback of time series related to cache (10.1.3.4.1)?
Thanks and Regards
Ananth
OK, first I created one table:
CREATE TABLE TEMP1234(
location VARCHAR2(6),
date_start DATE
);
ALTER SESSION SET NLS_DATE_FORMAT='YYYY/MM/DD';
INSERT into TEMP1234 values ('l1', '2006/10/01');
INSERT into TEMP1234 values ('l1', '2006/10/20');
INSERT into TEMP1234 values ('l1', '2006/11/01');
INSERT into TEMP1234 values ('l2', '2006/11/03');
INSERT into TEMP1234 values ('l2', '2006/11/19');
INSERT into TEMP1234 values ('l1', '2006/11/28');
INSERT into TEMP1234 values ('l1', '2006/12/10');
COMMIT;
Now, once that is done, issue the following SQL:
SELECT location, date_start,
       MAX(date_start) OVER (ORDER BY date_start ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING) AS date_end
FROM TEMP1234
ORDER BY date_start;
The result will be as follows:
LOCATI DATE_START DATE_END
------ ---------- ----------
l1     2006/10/01 2006/10/20
l1     2006/10/20 2006/11/01
l1     2006/11/01 2006/11/03
l2     2006/11/03 2006/11/19
l2     2006/11/19 2006/11/28
l1     2006/11/28 2006/12/10
l1     2006/12/10 2006/12/10
7 rows selected.
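For readers following along outside the database, the same "take the next row's date as date_end" logic can be sketched in Python (my own illustration, not part of the original post; the last row keeps its own date, just like the ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING window):

```python
# Sort the rows by date_start; each row's date_end is the next row's
# date_start, except the last row, which keeps its own date.
rows = [
    ("l1", "2006/10/01"), ("l1", "2006/10/20"), ("l1", "2006/11/01"),
    ("l2", "2006/11/03"), ("l2", "2006/11/19"), ("l1", "2006/11/28"),
    ("l1", "2006/12/10"),
]
rows.sort(key=lambda r: r[1])  # YYYY/MM/DD strings sort chronologically
result = [
    (loc, start, rows[i + 1][1] if i + 1 < len(rows) else start)
    for i, (loc, start) in enumerate(rows)
]
for loc, start, end in result:
    print(loc, start, end)
```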
I hope this will solve your problem.
Rajs
www.oraclebrains.com -
Hi Friends,
We are facing one strange problem with our Live cache.
We have DP data in liveCache. For some reason the planning results got corrupted.
So we deleted the time series for the DP planning area and recreated the time series.
What we found is that even after deletion and recreation of the time series for the planning area, the key figure data still exists in liveCache and can be seen in the interactive planning book.
Can someone please give some hints on what went wrong?
Are there any other liveCache-related programs or jobs we need to run to get rid of the problem?
Regards
Krish
Hi Thomas,
You are right: after running /SAPAPO/TS_PSTRU_CONTENT_DEL, all CVCs for the POS are deleted and 0 CVCs are left.
But then I generated CVCs from the InfoCube again (possibly the same old combinations are generated, since the InfoCube is the same).
Then I initialized the planning area and accessed the selection ID in interactive planning.
Surprisingly, I can still see key figure values in the planning book.
Here are the steps I did:
1. Deleted time series for the PA (de-initialization)
2. Deleted CVCs from the POS
3. Deactivated and reactivated the POS
4. Generated CVCs again from the InfoCube
5. Created time series for the PA (re-initialization)
6. Loaded the selection ID in the interactive planning book
7. Still see key figure data for the loaded CVCs
Somehow the data is not getting cleaned from liveCache in spite of the above steps; it might be stored superfluously in liveCache.
I want to know whether any additional reports are available to get rid of this kind of issue.
Regards, -
Hi ,
Sometimes we see errors like "No Live Cache Anchor Found". Can somebody explain in detail what a live cache anchor is and why this inconsistency occurs? Is there a link to detailed documentation?
Best Regards,
Chandan Dubey
Dear Chandan,
This error message, "No Live Cache Anchor Found", states that for one or even several characteristic combinations (see transaction /SAPAPO/MC62) a so-called LiveCache anchor does not exist. In this case, a LiveCache anchor is a pointer to one or several time series in the LiveCache.
There is one LiveCache anchor per planning area, characteristic combination (planning object) and model (the model belonging to the planning version; see transaction /SAPAPO/MVM). If, for one planning area, time series objects were created for several versions with different models, then several LiveCache anchors exist for the same planning area and the same characteristic combination. If you created time series objects for several versions of the same model, then one LiveCache anchor points to several time series in the LiveCache.
If there is no LiveCache anchor for a planning area, a model and a characteristic combination, this also means that no corresponding time series exists in the LiveCache, and thus this characteristic combination cannot be used for planning with this planning area for any version of the model. If this state occurred for a certain characteristic combination, the above-mentioned error message appears if either exactly this characteristic combination is selected or a selection contains it.
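As a rough mental model of the anchor bookkeeping described above (my own sketch with made-up names, not SAP code): the anchor behaves like a lookup keyed by planning area, characteristic combination and model, and a missing entry is exactly the "No Live Cache Anchor Found" situation.

```python
# Hypothetical in-memory model of LiveCache anchors: one anchor per
# (planning_area, characteristic combination, model), pointing to one
# time series per planning version of that model.
anchors = {}

def create_time_series(planning_area, char_combo, model, version):
    # Creating time series objects also creates the anchor if missing.
    key = (planning_area, char_combo, model)
    anchors.setdefault(key, {})[version] = [0.0] * 12  # dummy buckets

def load_selection(planning_area, char_combo, model, version):
    key = (planning_area, char_combo, model)
    if key not in anchors or version not in anchors[key]:
        raise LookupError("No Live Cache Anchor Found")
    return anchors[key][version]

create_time_series("DP01", ("PROD1", "LOC1"), "000", "ACTIVE")
series = load_selection("DP01", ("PROD1", "LOC1"), "000", "ACTIVE")
```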
Possible causes and solutions:
- Time series objects have not yet been created for the selected version.
Solution: create time series objects (see documentation).
- You are using a planning session with version assignment. For the version that was actually selected, the time series objects were created; however, this is not the case for the assigned version.
Solution: create time series objects for the assigned version.
- New characteristic combinations were created without the 'Create time series ob' option (see transaction /SAPAPO/MC62).
Solution: execute report /SAPAPO/TS_LCM_PLOB_DELTA_SYNC for the basis planning object structure of the planning area. This creates the corresponding LiveCache anchors and LiveCache time series for all planning areas that use this basis planning object structure, and for all versions of these planning areas for which time series objects already exist.
If none of these possible solutions is successful, you can use report /SAPAPO/TS_LCM_CONS_CHECK to determine and correct the inconsistencies for a planning area ('Repair' option).
I hope this helps.
Regards,
Tibor -
What is the significance of Live Cache in demand planning ?
Hi all,
Can anyone explain to me the significance of live cache in demand planning? What issues will turn up for live cache if it is not properly maintained?
Thanks
Pooja
Hi Pooja,
SAP came up with the Live cache concept for storage and, most importantly, quick and efficient processing of transactional data. It is a layer between the database and the GUI, and both the search methods and the storage layout are optimized by its structure. In DP it is used to store time series data, whereas in SNP it can store both time series and order series data.
Regarding your second query, it is recommended to run a Live cache consistency check on a periodic basis to synchronize data between LC and the database tables. You can face many issues due to LC inconsistency, such as incorrect time series generation, transactional data discrepancies, and COM routine errors during background processing.
Let me know if it helps
Regards
Gaurav -
Very large time series database
Hi,
I am planning to use BDB-JE to store time series data.
I plan to store 1 month worth of data in a single record.
My key consists of the following parts: id,year_and_month,day_in_month
My data is an array of 31 doubles (One slot per day)
For example, a data record for May 10, 2008 will be stored as follows:
Data record: item_1, 20080510, 22
Key will be: 1, 200805, 9
Data will be: double[31], with the 10th slot populated with 22
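A minimal sketch of this month-bucket layout (my own illustration; the key encoding and helper names are assumptions, and a plain dict stands in for the BDB-JE store):

```python
import struct

DAYS = 31  # one slot per day of the month

def make_key(item_id, yyyymm):
    # Compact fixed-width key: 4-byte id + 4-byte year/month.
    return struct.pack(">II", item_id, yyyymm)

def store_point(db, item_id, yyyymmdd, value):
    yyyymm, day = divmod(yyyymmdd, 100)
    key = make_key(item_id, yyyymm)
    if key in db:
        slots = list(struct.unpack(f">{DAYS}d", db[key]))
    else:
        slots = [0.0] * DAYS
    slots[day - 1] = value  # May 10 lands in slot index 9
    db[key] = struct.pack(f">{DAYS}d", *slots)

db = {}  # stand-in for the BDB-JE database
store_point(db, 1, 20080510, 22.0)
month = struct.unpack(f">{DAYS}d", db[make_key(1, 200805)])
```

Each month record is 31 doubles (248 bytes) plus an 8-byte key, which is what makes the whole-month-per-record scheme so compact.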
Expected volume:
6,000,000 records per day
Usage pattern:
1) Access pattern is random (random ids). Maybe per id I have to retrieve multiple records, depending on how much history I need to retrieve.
2) Updates happen simultaneously
3) Wrt ACID properties, only durability is important
(data overwrites are very rare)
I built a few prototypes using the BDB-JE and BDB versions. As per my estimates, with the data I have currently, my database size will be 300GB and the growth rate will be 4GB per month. This is a huge database, and the access pattern is random.
In order to scale, I plan to distribute the data to multiple nodes (the database on each node will hold a certain range of ids) and process each request in parallel.
However, I have to live with only 1GB of RAM for every 20GB of BDB-JE database.
I have a few questions:
1) Since the data cannot fit in memory, and I am looking for ~5ms response time, is BDB/BDB-JE the right solution?
2) I read about the architectural differences between BDB-JE and BDB (log based vs page based). Which is a better fit for this kind of app?
3) Besides distributing the data to multiple nodes and doing parallel processing, is there anything I can do to improve throughput and scalability?
4) When do you plan to release Replication API for BDB-JE?
Thanks in advance,
Sashi
Sashi,
Thanks for taking the time to sketch out your application. It's still
hard to provide concise answers to your questions though, because so much is
specific to each application, and there can be so many factors.
1) Since the data cannot fit in memory, and I am looking for ~5ms
response time, is BDB/BDB-JE right solution?
2) I read about the architectural differences between BDB-JE and BDB
(Log based vs Page based). Which is better fit for this kind of app?
There are certainly applications based on BDB-JE and BDB that have
very stringent response-time requirements. The BDB products try to have low overhead and are often good matches for applications that need good response time. But in the end, you have to do some experimentation and estimation to translate your platform capabilities and application access pattern into a guess of what you might end up seeing.
For example, it sounds like a typical request might require multiple reads and then a write operation, and it sounds like you expect all these accesses to incur I/O. As a rule of thumb, you can think of a typical disk seek as being on the order of 10 ms, so to have a response time of around 5 ms, your data accesses need to be mostly cached.
That doesn't mean your whole data set has to fit in memory; it means your working set has to mostly fit. In the end, most application access isn't purely random either, and there is some kind of working set.
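The rule of thumb above can be turned into a back-of-envelope calculation (my own sketch; the hit-cost figure and per-request operation counts are assumptions, not measurements from the thread):

```python
# If a cache miss costs ~10 ms (one disk seek) and a hit ~0.1 ms,
# what cache hit rate does a 5 ms response-time target imply?
SEEK_MS = 10.0
HIT_MS = 0.1
TARGET_MS = 5.0

def required_hit_rate(ops_per_request):
    # Solve: ops * (hit * HIT_MS + (1 - hit) * SEEK_MS) <= TARGET_MS
    per_op_budget = TARGET_MS / ops_per_request
    return (SEEK_MS - per_op_budget) / (SEEK_MS - HIT_MS)

r1 = required_hit_rate(1)  # a single access already needs ~51% hits
r3 = required_hit_rate(3)  # e.g. two reads + a write needs ~84% hits
```

This is why the answer stresses keeping the working set cached: with several I/O-bound operations per request, a 5 ms target leaves almost no budget for disk seeks.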
BDB-C has better key-based locality of data on disk, and stores data
more compactly on disk and in memory. Whether that helps your
application depends on how much locality of reference you have in the
app -- perhaps the multiple database operations you're making per
request are clustered by key. BDB-JE usually has better concurrency
and better write performance. How much that impacts your application
is a function of what degree of data collision you see.
For both products, some general principles, such as reducing the size of your key as much as possible, will help. For BDB-JE, experimenting with options such as setting je.evictor.lruOnly to false may give better performance. Also for JE, tuning garbage collection to use a concurrent low-pause collector can provide smoother response times.
But that's all secondary to what you could do in the application, which is to make the cache as efficient as possible by reducing the size of the record and clustering accesses as much as possible.
> 4) When do you plan to release Replication API for BDB-JE?
Sorry, Oracle is very firm about not announcing release estimates.
Linda -
Hi, I've got the unenviable task of rewriting the data storage back end for a very complex legacy system which analyses time series data for a range of different data sets. I want to bring this data kicking and screaming into the 21st century by putting it into a database. While I have worked with databases for many years, I've never really had to put large amounts of data into one, and I've certainly never had to make sure I can get large chunks of that data back very quickly.
The data is shaped like this: multiple data sets (about 10 normally), each with up to 100k rows, with each row containing up to 300 data points (a grand total of about 300,000,000 data points). In each data set all rows contain the same number of points, but not all data sets contain the same number of points as each other. I will typically need to access a whole data set at a time, but I need to be able to address individual points (or at least rows) as well.
My current thinking is that storing each data point separately, while great from an access point of view, probably isn't practical from a speed point of view. Combined with the fact that most operations are performed on a whole row at a time, I think row-based storage is probably the best option.
Of the row-based storage solutions I think I have two options: multiple columns, or array-based. I'm favouring a single column holding an array of data points, as it fits well with the requirement that different data sets can have different numbers of points. If I use separate columns, I'm probably into multiple tables for the data and dynamic table/column creation.
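One way to picture the array-in-a-single-column idea (a sketch under my own assumptions, not the poster's schema): serialize each row's points into one binary blob whose header encodes the point count, so data sets with different row widths share one table layout.

```python
import struct

def pack_row(points):
    # Store the row as a 4-byte count followed by that many doubles;
    # different data sets can have different numbers of points per row.
    return struct.pack(f">I{len(points)}d", len(points), *points)

def unpack_row(blob):
    (count,) = struct.unpack_from(">I", blob)
    return list(struct.unpack_from(f">{count}d", blob, 4))

row = [float(i) for i in range(300)]  # up to 300 points per row
blob = pack_row(row)                  # this blob would go in one column
assert unpack_row(blob) == row
```

The whole-row blob keeps row reads to a single fetch, at the cost of rewriting the blob to change one point, which matches the "most operations touch a whole row" access pattern described above.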
To make sure this solution is fast, I was thinking of using Hibernate with caching turned on. Alternatively, I've used JBoss Cache with great results in the past.
Does this sound like a solution that will fly? Have I missed anything obvious? I'm hoping someone might help me check over my thinking before I commit serious amounts of time to this...
Hi,
Time Series Key Figure:
Basically, a time series key figure is used in Demand Planning only. Whenever you create a key figure and add it to a DP planning area, it is automatically converted into a time series key figure. Whenever you activate the planning area, you activate each key figure of the planning area with the time series planning version.
There is one more type of key figure, the order series key figure, which is mainly used in an SNP planning area.
Storage Bucket Profile:
The SBP is used to create space in live cache for a periodicity, e.g. from 2003 to 2010. When you create an SBP, it occupies space in live cache for the respective periodicity, which the planning area can use to store its data. So the storage bucket profile is used for storing the data of the planning area.
Time/Planning Bucket Profile:
Basically, the TBP is used to define the periodicity of a data view. If you want to see the data view in yearly, monthly, weekly and daily buckets, you have to define that in the TBP.
Hope this will help you.
Regards
Sujay -
How can I retrieve data from live cache?
This is in Demand Planning: SCM APO.
Please suggest ways.
Thanks & Regards,
Savitha
Hi,
Some time ago I worked on SAP APO.
To read live cache, you first have to open a SIM session.
You can do this as shown in this function module:
FUNCTION ZS_SIMSESSION_GET.
*"*"Local Interface:
*" IMPORTING
*" REFERENCE(IV_SIMID) TYPE /SAPAPO/VRSIOID
*" EXPORTING
*" REFERENCE(EV_SIMSESSION) TYPE /SAPAPO/OM_SIMSESSION
CONSTANTS:
lc_simsession_new TYPE c LENGTH 1 VALUE 'N'.
DATA:
lt_rc TYPE /sapapo/om_lc_rc_tab,
lv_simsession LIKE ev_simsession.
IF NOT ev_simsession IS INITIAL.
EXIT.
ENDIF.
*--> create Simsession
CALL FUNCTION 'GUID_CREATE'
IMPORTING
ev_guid_22 = lv_simsession.
*--> create transactional simulation
CALL FUNCTION '/SAPAPO/TSIM_SIMULATION_CONTRL'
EXPORTING
iv_simversion = iv_simid
iv_simsession = lv_simsession
iv_simsession_method = lc_simsession_new
iv_perform_commit = space
IMPORTING
et_rc = lt_rc
EXCEPTIONS
lc_connect_failed = 1
lc_com_error = 2
lc_appl_error = 3
multi_tasim_registration = 4.
IF sy-subrc > 0.
CLEAR ev_simsession.
* error can be found in lt_rc
ENDIF.
* return simsession
ev_simsession = lv_simsession.
ENDFUNCTION.
Then you can access the live cache.
In this case we read an order (if I remember correctly, it's a planned order):
DATA:
lv_vrsioid TYPE /sapapo/vrsioid,
lv_simsession TYPE /sapapo/om_simsession.
* Get vrsioid
CALL FUNCTION '/SAPAPO/DM_VRSIOEX_GET_VRSIOID'
EXPORTING
i_vrsioex_fld = '000' "By default
IMPORTING
e_vrsioid_fld = lv_vrsioid
EXCEPTIONS
not_found = 1
OTHERS = 2.
CALL FUNCTION 'ZS_SIMSESSION_GET'
EXPORTING
iv_simid = lv_vrsioid  "local variable declared above
IMPORTING
ev_simsession = lv_simsession.
CALL FUNCTION '/SAPAPO/RRP_LC_ORDER_GET_DATA'
EXPORTING
iv_order = iv_orderid
iv_simversion = lv_vrsioid
IMPORTING
et_outputs = lt_outputs
et_inputs = lt_inputs.
If you change something in your simsession, you have to merge it back afterwards, so that your changes become effective.
You can do this like that:
* Merge simulation version (to commit order changes)
CALL FUNCTION '/SAPAPO/TSIM_SIMULATION_CONTRL'
EXPORTING
iv_simversion = lv_vrsioid
iv_simsession = lv_simsession
iv_simsession_method = 'M'
EXCEPTIONS
lc_connect_failed = 1
lc_com_error = 2
lc_appl_error = 3
multi_tasim_registration = 4
target_deleted_saveas_failed = 5
OTHERS = 6.
I hope this helps... -
Hello
Got some queries, please. I tried to search the forum but was not able to find a solution.
1) What is the concept of time series in live cache? I have read the documentation and learned that there are 3 types - time series, order cache and ATP order cache.
2) What happens when we select the planning area and use the transaction 'Create time series' ?
Thank you.
Regards
KK
Hi
In layman's terms: in APO we have different dimensions. We can have product and location, and we attach the time dimension, which shows the data in live cache for a particular time.
Order series KFs are mainly used in APO SNP, where all the data is stored based on order categories; for example, a Forecast KF will be stored under the FA and FC order series categories.
When you select the planning area and create time series, it attaches the respective time horizon to the planning area.
Thanks
Amol -
Hello,
I want to check the data in the time series live cache for my planning area. What is the transaction to check it?
Thank you
Steve
Hello Steve,
Here are my answers:
For Q1: No, I don't think it's because you are in the 10th month of the year. The package size (i.e. the number of rows in each package) and the number of packages depend on a few factors: a) how much data is in your planning area, b) whether you implemented BAdI /SAPAPO/SDP_EXTRACT, and c) the parameters you placed in the "data records/calls" and "display extr. calls" fields.
For Q2: It is included because key figures with units/currencies (e.g. amounts and currencies) need UOM/BUOM/currency information, which is why it is also part of the output. You can check which unit characteristic a certain KF uses in transaction RSD1.
For Q3: Yes, you can, but you need to do more than what I mentioned before. Here are some ways to do that:
A) Generate an export DataSource. If you are on SCM < 5.0, connect it to an InfoSource and then to a cube. If you are on SCM 5.0, connect it to an InfoCube using a transformation rule. You can then load data from the planning area to the InfoCube. After that, use transaction /SAPAPO/RTSCUBE to load data from the cube to the PA.
B) You can opt to create a custom ABAP program that reads data from the DataSource, performs some processing, and then writes the data to the target planning area using function module /SAPAPO/TS_DM_SET or the planning book BAPI.
Hope this helps. -
Hi,
Can someone explain to me how the SNP data is stored in live cache?
I know it's stored as series of orders. But tell me, is there any mapping involved in it?
Thanks,
Siva.
Order series data is used by both SNP and PP/DS.
While I do not know the exact way data is stored in liveCache, the following helps me explain and understand it.
Each record in liveCache for order series data is stored as:
location-product combination - ATP category - quantity - timestamp.
Unlike time series data, the CVC is replaced by a location-product combination only, the key figure is replaced by an ATP category (each kind of order element has its own ATP category), and the time bucket is replaced by an actual timestamp (in UTC ddmmyyyyhhmmss format).
The mapping is essentially via the ATP category, which maps to the corresponding MRP elements in ECC or R/3.
In the SNP planning area, for each key figure the category group defines the grouping of ATP categories whose order series data will be displayed in that particular key figure.
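The category-group idea in the last sentence can be sketched as follows (my own illustration with made-up records; FA/FC come from the example above, while the 'BM' category, the key-figure name and the aggregation logic are assumptions, not SAP behavior):

```python
from collections import defaultdict

# Hypothetical order-series records, shaped as described above:
# (location_product, atp_category, quantity, utc_timestamp)
records = [
    (("PROD1", "LOC1"), "FA", 100.0, "01062008000000"),
    (("PROD1", "LOC1"), "FC", 50.0, "01062008000000"),
    (("PROD1", "LOC1"), "BM", 30.0, "01062008000000"),
]

# A key figure's category group lists the ATP categories it displays,
# e.g. a forecast key figure grouping FA and FC.
category_group = {"FORECAST_KF": {"FA", "FC"}}

# Sum each key figure from the categories its group includes; records
# outside the group (here "BM") are simply not shown in that key figure.
totals = defaultdict(float)
for lp, cat, qty, ts in records:
    for kf, cats in category_group.items():
        if cat in cats:
            totals[kf] += qty
```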
Hope this helps.
Thanks,
Somnath -
hello expert,
I want to install a live cache server on a separate host, but it has failed several times, and I get the same error message every time:
Some database applications are probably still running. Check the log file sdbinst.log.
The sdbinst.log file contains:
starting preparing phase of package Base 7.6.00.29 64 bit
file C:/Windows/SysWOW64/drivers/etc/services not found
installation exited abnormally at Tu, Dec 29, 2009 at 09:23:36
I am installing the live cache server on Windows 2008 64-bit.
How do I solve this issue? Can someone help me out? Thanks in advance.
Solved.
-
What are the possible live cache issues that can turn up for demand planning?
Hi all,
Can anyone tell me the issues that may arise in live cache, and how to ensure the performance and maintenance of the live cache that a planning area is utilizing?
If the utilization is high, how do I bring it down?
Can anyone please guide me on this?
Thanks
Pooja
Hi Pooja,
1) Accumulation of logs created during demand planning jobs will have an impact on performance and affect livecache, so they should be cleared periodically.
2) A livecache consistency check should be performed at planned intervals, which will reduce livecache issues.
3) Through transaction OM13, you can analyse the livecache and LCA objects and act accordingly.
4) You can also carry out /SAPAPO/PSTRUCONS (Consistency Check for Planning Object Structure) and /SAPAPO/TSCONS (Consistency Check for Time Series Network) related to demand planning, to avoid livecache issues later.
Regards
R. Senthil Mareeswaran. -
Hello Guru's,
My production system's Live Cache has crashed. I am trying to restore the last backup, which was taken yesterday night at 8:00 pm. I am able to restore the Live Cache, but I am unable to restore the logs to recover to a point in time, as it complains about the log sequence. I am not sure how to find the sequence number it is asking for.
The crash happened this morning at 7:00 am. The backup from last night at 8 pm has been restored, and I am trying to do a point-in-time recovery by applying the logs.
Note: log backups run every 2 hours, so after the complete backup last night we have log backups every 2 hours and are trying to apply them.
Kindly suggest what can be done..
Thanks,
Sravanthi.
Hello,
1. You have questions about how to solve the liveCache issues on the PROD system.
=> As Lars already recommended, please open an SAP message under component BC-DB-LVC to get SAP support. It will be helpful to know the SCM/APO version, the liveCache version and more details about the liveCache crash.
2. In case the liveCache crash was due to hardware problems, and the data or log volumes or the liveCache configuration files are corrupted:
=> Solve the hardware issues first, or use another server.
Follow this SAP note to create the liveCache instance:
457425 Homogeneous liveCache copy using backup/restore
You need to create/initialize the liveCache instance for the restore (or run Restore with initialization), restore the successful data backup, then leave the liveCache in admin status and continue the restore of the incremental data backup/log backups in sequence, if needed.
More information can be found in the SAP notes:
1377148 FAQ: SAP MaxDB backup/recovery
869267 FAQ: SAP MaxDB LOG area
820824 FAQ: SAP MaxDB/liveCache technology
Thank you and best regards, Natalia Khlopina -
SCM5.0 Live Cache issue
Hello,
I am new to SCM 5.0 and have 2 questions regarding live cache for SCM 5.0.
1) If I want to run SNP or PP/DS, the live cache is necessary; that means we MUST install live cache for an APO planning run?
2) From now on there is no 32-bit live cache version for SCM 5.0; I can only install the 64-bit version for SCM 5.0, right?
Could anyone give me some comments on these 2 issues? Thanks in advance.
Best Regards,
Karl
Hi Karl,
1. Whatever runs you execute in APO, there should be a liveCache. The data is refreshed from the database into livecache and accessed there.
In APO, data is stored as either order series or time series.
2. Yes, you have to install 64-bit.
Regards,
Kishore Reddy.