Query problem with cache
Hello experts,
I have the following problem.
I run a query and the first result is OK. When I run it a second time, the result is not OK. When I delete all cache types in RSRCACHE and then call the query again, everything is OK.
Very strange, but I hope someone can help me.
Yours, Andreas Täubrich
Hi,
Hmm, this sounds like a bug.
Has something changed in the BW system? For example currency conversion - do you have new exchange rates?
Then the cache should be invalidated.
Open an OSS message.
Deactivate the cache for this query, so that your users get the right result.
How is the performance after that - do you need the cache at all?
Sven
Similar Messages
-
Hello APEX people,
I posted my problem here:
Designing LOV Query Problem
What I have is a sequence like this:
CREATE SEQUENCE
DR_SEQ_FIRST_SCHEDULE_GROUP
MINVALUE 1 MAXVALUE 7 INCREMENT BY 1 START WITH 1
CACHE 6 ORDER CYCLE ;
What I need would be a SQL query returning all possible values of my sequence, like:
1
2
3
4
5
6
7
I want to use it as a source for a LOV...
The reason why I use the cycling sequence is: My app uses it to cycle scheduling priorities every month to groups identified by this number (1-7).
In the Admin Form, I want to restrict the assignment in a user friendly way - a LOV.
Thanks
Johann

Here is the solution (posted by michales in the PL/SQL forum):
SQL> CREATE SEQUENCE
dr_seq_first_schedule_group
MINVALUE 1 MAXVALUE 7 INCREMENT BY 1 START WITH 1
CACHE 6 ORDER CYCLE
Sequence created.
SQL> SELECT LEVEL sn
FROM DUAL
CONNECT BY LEVEL <= (SELECT max_value
FROM user_sequences
WHERE sequence_name = 'DR_SEQ_FIRST_SCHEDULE_GROUP')
SN
1
2
3
4
5
6
7
7 rows selected. -
Viewing Excel Files using Tomcat - Problem with caching
Hi all,
A small part of an application I'm writing has links to Excel files for users to view/download. I'm currently using Tomcat v5 as the web/app server and have some very simple code (an example is shown below) which calls the excel file.
<%@ page contentType = "application/vnd.ms-excel" %>
<%
response.setHeader("Pragma", "no-cache");
response.setHeader("Cache-Control", "no-cache");
response.setDateHeader("Expires", 0);
response.sendRedirect("file1.xls");
%>
This all works, but I'm having one big problem.
The xls file (file1.xls) is updated via a share on the server, so each month the xls file is overwritten with the same name but different contents. I'm finding that when an update is made to the xls file and the user then attempts to view the new file in the browser, they receive only the old xls file. It's caching the xls file and I don't want it to. How can I fix this so that it automatically gives the user the new, updated file?
The only way I've managed to get Tomcat to do this is to delete the work directory and delete the file from my IE temp folder and then restart Tomcat - this is a bit much!
Any help would be greatly appreciated.
Thanks.

I had a problem with caching a few years back, for a servlet request which returned an SVG file.
As a workaround, I ended up appending "#" and a timestamp / random number to the URL. The browser assumed each request was new, and didn't use the cache.
Eg.
http://myserver/returnSVG.do#1234567
where 1234567 is a timestamp / random.
Not sure whether you can do this on a file based URL... but maybe worth a shot...
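A minimal sketch of that workaround (a hypothetical helper, not code from the thread). Note that a query parameter is generally safer than a "#" fragment, since fragments may never be sent to the server at all:

```java
public class CacheBuster {
    // Append a unique suffix so the browser treats every request as new
    // and bypasses its cache. Using "?t=..." rather than "#..." because
    // a fragment is not part of the request sent to the server.
    public static String bust(String url) {
        String sep = url.contains("?") ? "&" : "?";
        return url + sep + "t=" + System.currentTimeMillis();
    }

    public static void main(String[] args) {
        System.out.println(bust("http://myserver/file1.xls"));
    }
}
```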
regards,
Owen -
Problems with channels to the FICON SAN switches
We have some problems with the channels to the FICON SAN switches: channel 56 on Mainframe1 to switch 5B5 (FDBELF5C1_VSAN500_DISK) and channel 28 (on Mainframe1) to switch 5B7 (FDBELF5C1_VSAN510_TAPE). For Mainframe2 the channels are 35 to 5B5 and 37 to 5B7.
Hereby an example for switch 5B5 and channels 56 (BAD) and 43 (GOOD):
---> command: D M=DEV(5B5)
IEE174I 10.21.40 DISPLAY M 494
DEVICE 05B5 STATUS=ONLINE
CHP 56 43
ENTRY LINK ADDRESS 6543 655B
DEST LINK ADDRESS 75FE 75FE
PATH ONLINE N Y
CHP PHYSICALLY ONLINE Y Y
PATH OPERATIONAL Y Y
MANAGED N N
CU NUMBER 05B5 05B5
MAXIMUM MANAGED CHPID(S) ALLOWED: 0
DESTINATION CU LOGICAL ADDRESS = 00
SCP CU ND = 0MDS9K.513.CSC.1F.00059B7ACEC4.00E0
SCP TOKEN NED = 0MDS9K.513.CSC.1F.00059B7ACEC4.0000
SCP DEVICE NED = 0MDS9K.513.CSC.1F.00059B7ACEC4.0000
When one tries to put the path online then the following error messages are seen:
---> command: VARY PATH(5B5,56),ONLINE
IEE386I PATH(05B5,56) NOT BROUGHT ONLINE
IEE763I NAME= IOSCCMSG CODE= 0000000800000032
IOS291I CONFIGURATION DATA COULD NOT BE READ ON PATH(05B5,56) RC=32
DEVICE SUPPORT CODE DETECTED INCORRECT CDR DATA
IEE763I NAME= IECVIOPM CODE= 0000000400000000
IOS554I CONFIGURATION DATA PROCESSING FAILED
IEE764I END OF IEE386I RELATED MESSAGES
We don't think that the problem is port related but switch related!
For the moment problems are present on only one of our 4 mainframe SAN directors. Firmware level of the directors was changed 3 weeks ago to NX-OS 4.2(7b).

Problem solved after the IODF weekend.
Channels were toggled.
Rgds,
Ivy -
Querying the toplink cache under high-load
We've had some interesting experiences with "querying" the TopLink Cache lately.
It was recently discovered that our "read a single object" method was incorrectly
setting query.checkCacheThenDB() for all ReadObjectQueries. This was brought to light
when we upgraded our production servers from 4 cores to 8. We immediately started
experiencing very long response times under load.
We traced this down to the following stack: (TopLink version 10.1.3.1.0)
at java.lang.Object.wait(Native Method)
- waiting on <0x00002aab08fd26d8> (a oracle.toplink.internal.helper.ConcurrencyManager)
at java.lang.Object.wait(Object.java:474)
at oracle.toplink.internal.helper.ConcurrencyManager.acquireReadLock(ConcurrencyManager.java:179)
- locked <0x00002aab08fd26d8> (a oracle.toplink.internal.helper.ConcurrencyManager)
at oracle.toplink.internal.helper.ConcurrencyManager.checkReadLock(ConcurrencyManager.java:167)
at oracle.toplink.internal.identitymaps.CacheKey.checkReadLock(CacheKey.java:122)
at oracle.toplink.internal.identitymaps.IdentityMapKeyEnumeration.nextElement(IdentityMapKeyEnumeration.java:31)
at oracle.toplink.internal.identitymaps.IdentityMapManager.getFromIdentityMap(IdentityMapManager.java:530)
at oracle.toplink.internal.queryframework.ExpressionQueryMechanism.checkCacheForObject(ExpressionQueryMechanism.java:412)
at oracle.toplink.queryframework.ReadObjectQuery.checkEarlyReturnImpl(ReadObjectQuery.java:223)
at oracle.toplink.queryframework.ObjectLevelReadQuery.checkEarlyReturn(ObjectLevelReadQuery.java:504)
at oracle.toplink.queryframework.DatabaseQuery.execute(DatabaseQuery.java:564)
at oracle.toplink.queryframework.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:779)
at oracle.toplink.queryframework.ReadObjectQuery.execute(ReadObjectQuery.java:388)
We moved the query back to the default, query.checkByPrimaryKey() and this issue went away.
The bottleneck seemed to stem from the read lock on the CacheKey, taken from IdentityMapKeyEnumeration:
public Object nextElement() {
    if (this.nextKey == null) {
        throw new NoSuchElementException("IdentityMapKeyEnumeration nextElement");
    }
    // CR#... Must check the read lock to avoid
    // returning half built objects.
    this.nextKey.checkReadLock();
    return this.nextKey;
}
We had many threads that were having contention while searching the cache for a particular query.
From the stack we know that the contention was limited to one class. We've since refactored that code
not to use a query in that code path.
Question:
Armed with this better knowledge of how TopLink queries the cache, we do have a few objects that we
frequently read by something other than the primary key. A natural key, but not the oid.
We have some other caching mechanisms in place (JBoss TreeCache) to help eliminate queries to the DB
for these objects. But the TreeCache also tries to acquire a read lock when accessing the cache.
Presumably a read lock over the network to the cluster.
Is there anything that can be done about the read lock on CacheKey when querying the cache in a high load
situation?

CheckCacheThenDatabase will check the entire cache for a match using a linear search. This can be inefficient if the cache is very large. Typically it is more efficient to access the database if your cache is large and the field you are querying on is indexed in the table.
The cache concurrency was greatly improved in TopLink 11g/EclipseLink, so you may wish to try it out.
Supporting indexes in the TopLink/EclipseLink cache is something desirable (feel free to log the enhancement request on EclipseLink). You can simulate this to some degree using a named query and a query cache.
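To illustrate why the linear scan hurts and what an in-memory index buys, here is a self-contained sketch (invented names, not the TopLink API): a second map from natural key to primary key turns the O(n) scan into a single hash probe, which is roughly the effect a cached named query gives you:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration only: natural-key lookup against a primary-key cache.
public class NaturalKeyIndex {
    public static final class Entity {
        public final long oid;
        public final String naturalKey;
        public final String data;
        public Entity(long oid, String naturalKey, String data) {
            this.oid = oid;
            this.naturalKey = naturalKey;
            this.data = data;
        }
    }

    private final Map<Long, Entity> byOid = new ConcurrentHashMap<>();
    private final Map<String, Long> oidByNaturalKey = new ConcurrentHashMap<>();

    public void put(Entity e) {
        byOid.put(e.oid, e);
        oidByNaturalKey.put(e.naturalKey, e.oid);
    }

    // Slow path: linear scan of the whole cache (and, in TopLink, a read
    // lock taken per entry -- the contended stack shown in the thread).
    public Entity findByScan(String naturalKey) {
        for (Entity e : byOid.values()) {
            if (e.naturalKey.equals(naturalKey)) {
                return e;
            }
        }
        return null;
    }

    // Indexed path: one probe on the natural-key map, then a primary-key
    // hit -- no traversal, no per-entry locking.
    public Entity findByIndex(String naturalKey) {
        Long oid = oidByNaturalKey.get(naturalKey);
        return oid == null ? null : byOid.get(oid);
    }
}
```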
-- James : http://www.eclipselink.org -
Hi,
I am running an MDX query, and when I checked in Profiler it shows a long list of Query Dimension (Event Class) 1 - Cache data events. What does it mean?
I think it's not hitting the storage engine but rather pulling from cache - but why so much caching? What does this event class mean?
Please help!

Hi Pinu123,
Create Cache for Analysis Services (AS) was introduced in SP2 of SQL Server 2005. It can be used to make one or more queries run faster by populating the OLAP storage engine cache first; the query results are cached in memory for re-use.
In your scenario, you said the results are not hitting the storage engine but rather are pulled from cache. In this case, it seems these results had been queried by other users and cached in memory. For more information about cached data, please refer to the links below.
How to warm up the Analysis Services data cache using Create Cache statement
Examining MDX Query performance using Block Computation
Regards,
Charlie Liao
TechNet Community Support -
SQL+-MULTI TABLE QUERY PROBLEM
HI ALL,
ANY SUGGESTION PLEASE?
SUB: SQL+-MULTI TABLE QUERY PROBLEM
SQL+ QUERY GIVEN:
SELECT PATIENT_NUM, PATIENT_NAME, HMTLY_TEST_NAME, HMTLY_RBC_VALUE,
HMTLY_RBC_NORMAL_VALUE, DLC_TEST_NAME, DLC_POLYMORPHS_VALUE,
DLC_POLYMORPHS_NORMAL_VALUE FROM PATIENTS_MASTER1, HAEMATOLOGY1,
DIFFERENTIAL_LEUCOCYTE_COUNT1
WHERE PATIENT_NUM = HMTLY_PATIENT_NUM AND PATIENT_NUM = DLC_PATIENT_NUM AND PATIENT_NUM
= &PATIENT_NUM;
RESULT GOT:
&PATIENT_NUM =1
no rows selected
&PATIENT_NUM=2
no rows selected
&PATIENT_NUM=3
PATIENT_NUM 3
PATIENT_NAME KKKK
HMTLY_TEST_NAME HAEMATOLOGY
HMTLY_RBC_VALUE 4
HMTLY_RBC_NORMAL 4.6-6.0
DLC_TEST_NAME DIFFERENTIAL LEUCOCYTE COUNT
DLC_POLYMORPHS_VALUE 60
DLC_POLYMORPHS_NORMAL_VALUE 40-65
ACTUAL WILL BE:
&PATIENT_NUM=1
PATIENT_NUM 1
PATIENT_NAME BBBB
HMTLY_TEST_NAME HAEMATOLOGY
HMTLY_RBC_VALUE 5
HMTLY_RBC_NORMAL 4.6-6.0
&PATIENT_NUM=2
PATIENT_NUM 2
PATIENT_NAME GGGG
DLC_TEST_NAME DIFFERENTIAL LEUCOCYTE COUNT
DLC_POLYMORPHS_VALUE 42
DLC_POLYMORPHS_NORMAL_VALUE 40-65
&PATIENT_NUM=3
PATIENT_NUM 3
PATIENT_NAME KKKK
HMTLY_TEST_NAME HAEMATOLOGY
HMTLY_RBC_VALUE 4
HMTLY_RBC_NORMAL 4.6-6.0
DLC_TEST_NAME DIFFERENTIAL LEUCOCYTE COUNT
DLC_POLYMORPHS_VALUE 60
DLC_POLYMORPHS_NORMAL_VALUE 40-65
4 TABLES FOR CLINICAL LAB FOR INPUT DATA AND GET REPORT ONLY FOR TESTS MADE FOR PARTICULAR
PATIENT.
TABLE1:PATIENTS_MASTER1
COLUMNS:PATIENT_NUM, PATIENT_NAME,
VALUES:
PATIENT_NUM
1
2
3
4
PATIENT_NAME
BBBB
GGGG
KKKK
PPPP
TABLE2:TESTS_MASTER1
COLUMNS:TEST_NUM, TEST_NAME
VALUES:
TEST_NUM
1
2
TEST_NAME
HAEMATOLOGY
DIFFERENTIAL LEUCOCYTE COUNT
TABLE3:HAEMATOLOGY1
COLUMNS:
HMTLY_NUM,HMTLY_PATIENT_NUM,HMTLY_TEST_NAME,HMTLY_RBC_VALUE,HMTLY_RBC_NORMAL_VALUE
VALUES:
HMTLY_NUM
1
2
HMTLY_PATIENT_NUM
1
3
HMTLY_TEST_NAME
HAEMATOLOGY
HAEMATOLOGY
HMTLY_RBC_VALUE
5
4
HMTLY_RBC_NORMAL_VALUE
4.6-6.0
4.6-6.0
TABLE4:DIFFERENTIAL_LEUCOCYTE_COUNT1
COLUMNS:DLC_NUM,DLC_PATIENT_NUM,DLC_TEST_NAME,DLC_POLYMORPHS_VALUE,DLC_POLYMORPHS_
NORMAL_VALUE,
VALUES:
DLC_NUM
1
2
DLC_PATIENT_NUM
2
3
DLC_TEST_NAME
DIFFERENTIAL LEUCOCYTE COUNT
DIFFERENTIAL LEUCOCYTE COUNT
DLC_POLYMORPHS_VALUE
42
60
DLC_POLYMORPHS_NORMAL_VALUE
40-65
40-65
THANKS
RCS
E-MAIL:[email protected]
--------

I think you want an OUTER JOIN:
SELECT PATIENT_NUM, PATIENT_NAME, HMTLY_TEST_NAME, HMTLY_RBC_VALUE,
HMTLY_RBC_NORMAL_VALUE, DLC_TEST_NAME, DLC_POLYMORPHS_VALUE,
DLC_POLYMORPHS_NORMAL_VALUE
FROM PATIENTS_MASTER1, HAEMATOLOGY1, DIFFERENTIAL_LEUCOCYTE_COUNT1
WHERE PATIENT_NUM = HMTLY_PATIENT_NUM (+)
AND PATIENT_NUM = DLC_PATIENT_NUM (+)
AND PATIENT_NUM = &PATIENT_NUM;

Edited by: shoblock on Nov 5, 2008 12:17 PM
Outer join marks became stupid emoticons or something; attempting to fix. -
I want to ask how to keep a query in the buffer cache always.
A query which runs frequently in a database will tend to stay in the buffer cache due to the LRU algorithm: LRU will not flush a frequently-run query's blocks from the buffer cache. But if I want to place a query in the buffer cache explicitly, what should I do? Please answer soon. I recently completed the Oracle DBA courseware, and in an interview the interviewer asked me what I would do if he wanted to place a query in the buffer cache explicitly.
As a fresher my knowledge is limited, and I am trying to improve it continuously.
The interviewer was a senior Oracle DBA. According to my knowledge, I told him that we can keep objects like tables and indexes in the KEEP cache by declaring them so, or that if the same query runs continuously it will automatically remain in the cache via LRU. Pinning a query itself is new to me; I was searching for an answer but wasn't finding one, so finally I decided to put this question on the forum...
I am trying hard to get a job as a DBA, but all in vain so far; it's really becoming frustrating. I have great hopes on Oracle, and besides hope I have liked learning Oracle since the beginning of my graduation; that's why I decided to make my career in Oracle, especially as a DBA...
now let's see what happens.... -
Problems with cache.clear()
Hello!
We are having some problems with cache clears in our production cluster that we do once a day. Sometimes heaps "explode" with a lot of outgoing messages when we do a cache.clear() and the entire cluster starts failing.
We had some success with an alternate method of doing the cache clear where we iterate cache.keySet() and do a cache.remove(key), with a pause of 100 ms after every 20000 objects, until the cache is empty. But today nodes started failing on a cache.size() before the removes even started (the first thing we do is log the size of the cache we are about to clear, before the remove operations start).
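That batched-remove workaround can be sketched generically (plain java.util.Map here; a Coherence NamedCache implements Map, but the helper below is an illustration, not something tested against Coherence):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class BatchedClear {
    // Remove entries in chunks with a pause between batches, so the
    // cluster is not hit with one huge burst of backup/invalidation
    // messages the way a single clear() can be.
    public static void clearInBatches(Map<?, ?> cache, int batchSize, long pauseMillis) {
        // Snapshot the keys first so we do not iterate a live key set
        // while removing from it.
        List<Object> keys = new ArrayList<>(cache.keySet());
        int inBatch = 0;
        for (Object key : keys) {
            cache.remove(key);
            if (++inBatch >= batchSize) {
                inBatch = 0;
                try {
                    Thread.sleep(pauseMillis); // let in-flight messages drain
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return; // stop early if interrupted; cache may be partially cleared
                }
            }
        }
    }
}
```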
We have multiple distributed caches configured with a near cache. The nearcache has 10k objects as high units and the back caches vary in size, the largest is around 300k / node.
In total the DistributedCache-service is handling ~20 caches.
The cluster consists of 18 storage enabled nodes spread across 6 servers and 31 non storage enabled nodes running on 31 servers.
The invalidation strategy on the near caches is ALL (or, it's AUTO but it looks like it selects ALL, since ListenerFilterCount=29 and ListenerKeyCount=0 on a StorageManager?).
Partition count is 257, backup count 1, no changes in thread count on the service, service is DistributedCache.
Coherence version 3.6.1.0.
A udp test sending from one node to another displays 60 megabyte/s.
Heapsize for the Coherence JVMs, 3gb. LargePages is used.
Heapsize for the front nodes JVMs, 6gb. LargePages is used.
No long GC-times (until the heaps explode), 0.2-0.6 seconds. CMS-collector.
JDK 1.6 u21 64-bit.
Windows 2k8R2.
We are also running Coherence*Web and some read/write caches, but on different Coherence services. We are not doing any clear/size operations against caches owned by these services.
Looking at some metrics from the last time we had this problem (where we crashed on cache.size()).
The number of messages sent by the backing nodes went from <100/s to 20k-50k/s in 15 s.
The number of messages resent by the backing nodes went from ~0/s to 1k-50k/s depending on the node in 15 s.
At the time the total number of requests against the DistributedCache-service was around 6-8/s and node.
To my questions: should it be a problem to do a cache clear with this setup (where the largest cache is around 5.4 million entries)? Should it be a problem to do a cache.size()?
What is the nicest way to do a cache.clear()? Any other strategy?
Could a lock on an entry in the cache cause this problem? Should it really cause a problem with cache.size()?
Any advice?
BR,
Carl
Edited by: carl_abramsson on 2011-nov-14 06:16

Hi Charlie,
Thank you for your answer! Yes, actually we are using a lot of expiry, and many of the items are created at roughly the same time! We haven't configured expiry in the cache configuration; instead we do a put with an expiry.
Regarding the workload: compared to our peak hours it has been very low when we had problems with the size and clear operations. So the backing tier isn't really doing much at the time. That's what has been so strange about this problem.
The release going live today has PRESENT as the near cache invalidation strategy. We will remove as much of the expiry usage as possible in the next one.
BR,
Carl -
Hi,
I am trying to analyze Query Performance in ST03N.
I run a Query with Property = Cache Inactive.
I get Runtime as say 180 sec.
Again when I run the same query, the runtime decreases - now it is, say, 150 sec.
Please explain the reason for the lower runtime when no cache is used.

Could be two things occurring. When you say the cache is inactive, that probably refers to the OLAP global cache. The user session still has a local cache, so if the user executes the same navigation in the same session, results are retrieved from the local cache.
Also, as the others have mentioned, if the same DB query really is executed on the DB a second time, not only does the SQL statement parsing/optimization not need to occur again, but almost certainly some/all of the data still remains in the DB's buffers, so it only needs to be retrieved from memory rather than physically read from disk again.
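The session-local-cache effect described above can be mimicked with a trivial memoizing lookup (a generic sketch, nothing BW-specific): the first execution "hits the database", and repeats within the same session are answered from memory, which is why the second run is faster even with the global cache inactive.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SessionLocalCache {
    private final Map<String, String> results = new ConcurrentHashMap<>();
    private int dbHits = 0;

    // First call for a given query computes the result; repeats are
    // served from the local result map without touching the "database".
    public String run(String query) {
        return results.computeIfAbsent(query, q -> {
            dbHits++; // pretend this is the expensive DB round trip
            return "result:" + q;
        });
    }

    public int dbHits() {
        return dbHits;
    }
}
```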
Aggregate query on global cache group table
Hi,
I set up two global cache nodes. As we know, a global cache group is dynamic.
The cache group can be dynamically loaded by primary key or foreign key, to my understanding.
There are three records in oracle cache table, and one record is loaded in node A, and the other two records in node B.
Oracle:
1 Java
2 C
3 Python
Node A:
1 Java
Node B:
2 C
3 Python
If I select count(*) in Node A or Node B, the result respectively is 1 and 2.
The questions are:
How can I get the real count of 3?
Is it reasonable to do this query on a global cache group table?
I have one idea: create another read-only node for aggregation queries, but it seems weird.
Thanks very much.
Regards,
Nesta
Edited by: user12240056 on Dec 2, 2009 12:54 AM

Do you mean something like
UPDATE sometable SET somecol = somevalue;
where you are updating all rows (or where you may use a WHERE clause that matches many rows and is not an equality)?
This is not something you can do in one step with a GLOBAL DYNAMIC cache group. If the number of rows that would be affected is small and you know the keys of every row that must be updated, then you could simply execute multiple individual updates. If the number of rows is large, or you do not know all the keys in advance, then maybe you would adopt the approach of ensuring that all relevant rows are already in the local cache grid node via LOAD CACHE GROUP ... WHERE ... Alternatively, if you do not need Grid functionality, you could consider using a single cache with a non-dynamic (explicitly loaded) cache group and just pre-load all the data.
I would not try and use JTA to update rows in multiple grid nodes in one transaction; it will be slow and you would have to know which rows are located in which nodes...
Chris -
Hi
I am a newbie to Oracle Coherence trying to get hands-on experience by running an example (coherence-example-distributedload.zip) (Coherence GE 3.6.1). I am running two instances of the server. After this I ran "load.cmd" to distribute data across the two server nodes - I can see that the data is partitioned across the server instances.
Now I run another instance (in another JVM) of a program which tries to join the distributed cache and query the data loaded on the server instances. I see that the new JVM joins the cluster, but querying for data returns no records. Can you please tell me if I am missing something?
NamedCache nNamedCache = CacheFactory.getCache("example-distributed");
Filter eEqualsFilter = new GreaterFilter("getLocId", "1000");
Set keySet = nNamedCache.keySet(eEqualsFilter);
I see here that keySet has no records. Can you please help?
Thanks
sunder

I got this problem sorted out - the problem was in cache-config.xml. The correct one looks as below.
<distributed-scheme>
<scheme-name>example-distributed</scheme-name>
<service-name>DistributedCache1</service-name>
<backing-map-scheme>
<read-write-backing-map-scheme>
<scheme-name>DBCacheLoaderScheme</scheme-name>
<internal-cache-scheme>
<local-scheme>
<scheme-ref>DBCache-eviction</scheme-ref>
</local-scheme>
</internal-cache-scheme>
<cachestore-scheme>
<class-scheme>
<class-name>com.test.DBCacheStore</class-name>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>locations</param-value>
</init-param>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>{cache-name}</param-value>
</init-param>
</init-params>
</class-scheme>
</cachestore-scheme>
<cachestore-timeout>6000</cachestore-timeout>
<refresh-ahead-factor>0.5</refresh-ahead-factor>
</read-write-backing-map-scheme>
</backing-map-scheme>
<thread-count>10</thread-count>
<autostart>true</autostart>
</distributed-scheme>
<invocation-scheme>
<scheme-name>example-invocation</scheme-name>
<service-name>InvocationService1</service-name>
<autostart system-property="tangosol.coherence.invocation.autostart">true</autostart>
</invocation-scheme>
Missed <class-scheme> element inside <cachestore-scheme> of <read-write-backing-map-scheme>.
Thanks
sunder -
Query Problem Oracle9i Release 9.2.0.7.0
Hello,
I need help with this one. I have an xml file like this:
<?xml version="1.0" ?>
<TecKomBS xmlns="http://www.bsi.si" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.bsi.si http://www.bsi.si/_data/tecajnice/TecKomBS-l.xsd">
<tecajnica datum="2006-12-29" veljavnost="2007-01-01">
<tecaj oznaka="GBP" sifra="826">
<nakupni>0.67251</nakupni>
<prodajni>0.67049</prodajni>
</tecaj>
<tecaj oznaka="CZK" sifra="203">
<nakupni>27.526</nakupni>
<prodajni>27.444</prodajni>
</tecaj>
<tecaj oznaka="NOK" sifra="578">
<nakupni>8.2504</nakupni>
<prodajni>8.2256</prodajni>
</tecaj>
<tecaj oznaka="SEK" sifra="752">
<nakupni>9.0540</nakupni>
<prodajni>9.0268</prodajni>
</tecaj>
<tecaj oznaka="JPY" sifra="392">
<nakupni>157.17</nakupni>
<prodajni>156.69</prodajni>
</tecaj>
<tecaj oznaka="HUF" sifra="348">
<nakupni>252.15</nakupni>
<prodajni>251.39</prodajni>
</tecaj>
<tecaj oznaka="AUD" sifra="036">
<nakupni>1.6716</nakupni>
<prodajni>1.6666</prodajni>
</tecaj>
<tecaj oznaka="CAD" sifra="124">
<nakupni>1.5304</nakupni>
<prodajni>1.5258</prodajni>
</tecaj>
<tecaj oznaka="SKK" sifra="703">
<nakupni>34.487</nakupni>
<prodajni>34.383</prodajni>
</tecaj>
<tecaj oznaka="PLN" sifra="985">
<nakupni>3.8367</nakupni>
<prodajni>3.8253</prodajni>
</tecaj>
<tecaj oznaka="DKK" sifra="208">
<nakupni>7.4672</nakupni>
<prodajni>7.4448</prodajni>
</tecaj>
<tecaj oznaka="HRK" sifra="191">
<nakupni>7.3614</nakupni>
<prodajni>7.3394</prodajni>
</tecaj>
<tecaj oznaka="USD" sifra="840">
<nakupni>1.319</nakupni>
<prodajni>1.315</prodajni>
</tecaj>
<tecaj oznaka="CHF" sifra="756">
<nakupni>1.6093</nakupni>
<prodajni>1.6045</prodajni>
</tecaj>
</tecajnica>
<tecajnica datum="2007-01-02" veljavnost="2007-01-03">
<tecaj oznaka="GBP" sifra="826">
<nakupni>0.67451</nakupni>
<prodajni>0.67249</prodajni>
</tecaj>
<tecaj oznaka="CZK" sifra="203">
<nakupni>27.566</nakupni>
<prodajni>27.484</prodajni>
</tecaj>
<tecaj oznaka="NOK" sifra="578">
<nakupni>8.2203</nakupni>
<prodajni>8.1957</prodajni>
</tecaj>
<tecaj oznaka="SEK" sifra="752">
<nakupni>9.038</nakupni>
<prodajni>9.011</prodajni>
</tecaj>
<tecaj oznaka="JPY" sifra="392">
<nakupni>158.000</nakupni>
<prodajni>157.520</prodajni>
</tecaj>
<tecaj oznaka="HUF" sifra="348">
<nakupni>251.82</nakupni>
<prodajni>251.06</prodajni>
</tecaj>
<tecaj oznaka="AUD" sifra="036">
<nakupni>1.6719</nakupni>
<prodajni>1.6669</prodajni>
</tecaj>
<tecaj oznaka="CAD" sifra="124">
<nakupni>1.5475</nakupni>
<prodajni>1.5429</prodajni>
</tecaj>
<tecaj oznaka="SKK" sifra="703">
<nakupni>34.435</nakupni>
<prodajni>34.331</prodajni>
</tecaj>
<tecaj oznaka="PLN" sifra="985">
<nakupni>3.8344</nakupni>
<prodajni>3.8230</prodajni>
</tecaj>
<tecaj oznaka="DKK" sifra="208">
<nakupni>7.4678</nakupni>
<prodajni>7.4454</prodajni>
</tecaj>
<tecaj oznaka="HRK" sifra="191">
<nakupni>7.3735</nakupni>
<prodajni>7.3515</prodajni>
</tecaj>
<tecaj oznaka="USD" sifra="840">
<nakupni>1.329</nakupni>
<prodajni>1.325</prodajni>
</tecaj>
<tecaj oznaka="CHF" sifra="756">
<nakupni>1.6128</nakupni>
<prodajni>1.6080</prodajni>
</tecaj>
</tecajnica>
<tecajnica datum="2007-01-03" veljavnost="2007-01-04"> ... </tecajnica>
<tecajnica datum="2007-01-04" veljavnost="2007-01-05"> ... </tecajnica>
<tecajnica datum="2007-01-05" veljavnost="2007-01-06"> ... </tecajnica>
<tecajnica datum="2007-01-08" veljavnost="2007-01-09"> ... </tecajnica>
<tecajnica datum="2007-01-09" veljavnost="2007-01-10"> ... </tecajnica>
</TecKomBS>
Under every "tecajnica", from which I want to get "datum" as a date, I have more data:
first "tecaj", from which I want to get "sifra", and under "tecaj" I first have "nakupni", from which I want the value of "nakupni", and second "prodajni", from which I want the value of "prodajni".
So the data I want would look like this:
For every "datum" I want "sifra", "nakupni" and "prodajni"
If I have in my xml file only one "datum" for "tecajnica" this is not a problem, I get this with this query:
SELECT
extractValue(Value(d), 'tecajnica/.@datum','xmlns="http://www.bsi.si"') datum,
extractValue(VALUE(p), 'tecaj/.@sifra','xmlns="http://www.bsi.si"') sval,
extractValue(VALUE(p), 'tecaj/nakupni','xmlns="http://www.bsi.si"') nakupni,
extractValue(VALUE(p), 'tecaj/prodajni','xmlns="http://www.bsi.si"') prodajni
FROM
Xml_tab x,
TABLE(XMLSequence(extract(x.xml, '/TecKomBS/tecajnica','xmlns="http://www.bsi.si"'))) d,
TABLE(XMLSequence(extract(x.xml, '/TecKomBS/tecajnica/*','xmlns="http://www.bsi.si"'))) p
but if I have an xml file like the example on top (with more dates), I get a mix of all the data in the xml file, so I get a lot of wrong rows!
I have tried some examples with XQuery but I guess I am doing something wrong.
Can you please help me?
Thanks for your time
J

I have also tried this, but I get an error that doesn't tell me anything:
select xmlquery('for $i in $d/TecKomBS/Tecajnica where $i/.@datum eq
"2006-01-26" return $d/TecKomBS/Tecajnica/.@datum'
passing t.XML as "d" returning content)
from Xml_tab t
I get this:
SQL> /
passing t.XML as "d" returning content)
ERROR at line 2:
ORA-00907: missing right parenthesis
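The mixed-up rows come from the two TABLE() expressions being joined without correlation, which produces a cross product of every "datum" with every "tecaj". The correlation the query needs, with each "tecaj" read from inside its own parent "tecajnica", can be illustrated outside the database in plain Java DOM code (a generic sketch, not the Oracle XML DB fix):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class TecajnicaParser {
    // One output row per tecaj: {datum, sifra, nakupni, prodajni}. The key
    // point is the nesting: tecaj elements are read from their own parent
    // tecajnica, so each row carries the right datum.
    public static List<String[]> parse(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            List<String[]> rows = new ArrayList<>();
            NodeList days = doc.getElementsByTagName("tecajnica");
            for (int i = 0; i < days.getLength(); i++) {
                Element day = (Element) days.item(i);
                String datum = day.getAttribute("datum");
                NodeList rates = day.getElementsByTagName("tecaj");
                for (int j = 0; j < rates.getLength(); j++) {
                    Element rate = (Element) rates.item(j);
                    rows.add(new String[] {
                        datum,
                        rate.getAttribute("sifra"),
                        rate.getElementsByTagName("nakupni").item(0).getTextContent(),
                        rate.getElementsByTagName("prodajni").item(0).getTextContent()
                    });
                }
            }
            return rows;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<TecKomBS>"
            + "<tecajnica datum=\"2006-12-29\"><tecaj oznaka=\"GBP\" sifra=\"826\">"
            + "<nakupni>0.67251</nakupni><prodajni>0.67049</prodajni></tecaj></tecajnica>"
            + "<tecajnica datum=\"2007-01-02\"><tecaj oznaka=\"GBP\" sifra=\"826\">"
            + "<nakupni>0.67451</nakupni><prodajni>0.67249</prodajni></tecaj></tecajnica>"
            + "</TecKomBS>";
        for (String[] row : parse(xml)) {
            System.out.println(String.join(" ", row));
        }
    }
}
```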
Can you help me please ? -
2 Query Problems....
Hi there,
I've got two major problems. I'm a relative newbie to SAP BW and the Query Designer.
First problem:
My query looks like this:
Year-----Sales--Change
2003---1000 €---0%
2004---2000 €---100%
2005---3000 €---50%
Now I want a third column in which the change in sales is displayed relative to the year before.
I will try to explain it a little: when my sales were 1000 € in 2003, I want to calculate the increase in percent relative to the year before. Is this possible?
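The calculation being asked for is plain year-over-year percent change, (current − previous) / previous × 100; a minimal sketch (generic Java for illustration, not BEx formula syntax):

```java
public class YearOverYear {
    // Year-over-year change: (current - previous) / previous * 100.
    // The first year has no predecessor, so its change stays 0, matching
    // the 0% shown for 2003 in the example table.
    public static double[] percentChange(double[] sales) {
        double[] change = new double[sales.length];
        for (int i = 1; i < sales.length; i++) {
            change[i] = (sales[i] - sales[i - 1]) / sales[i - 1] * 100.0;
        }
        return change;
    }

    public static void main(String[] args) {
        double[] c = percentChange(new double[] {1000, 2000, 3000});
        System.out.println(java.util.Arrays.toString(c)); // [0.0, 100.0, 50.0]
    }
}
```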
And now the second problem:
I've got another query in which I just want to show the overall result in a diagram. When I calculate the overall result, every row result is shown and therefore also appears in the diagram.
I try to explain this a bit more:
Year--Header1--Header2
2003--data-----data
2004--data-----data
2006--data-----data
O.Result--result1--result2
So I just want to show result1 and result2 in the diagram. How do I do this?
Thanks in advance for some answers.

Hi,
For your first problem: in the columns of your structure, create a selection for the current year, for example, and another selection for the previous year, and in a third column create a formula to calculate the progression.
For your second problem: hide the year from your query and you will have only the overall result.
Regards
Romain. -
Store Query Output or Cache Query
My search query returns about 5000 records or more at a time, out of 7 million records, and I am outputting about 35 records at a time on a page. The problem is that when the user pages to the next page, I hit the DB again and again through the user's paging process.
The cache attribute of <cfquery> does not work when you are using <cfqueryparam> in the components we are using.
How can I store all the output of the returned query in a variable and cycle through it, instead of hitting the DB every time? Or is there another method of caching this returned query output?
Please give an example.

Thanks, that did the trick. Another problem that came out of this is Internet Explorer: when users page to, let's say, page #6 and then use the back button to come back, when it reaches page one the query executes again. This only happens with Internet Explorer. I have tried caching the page but the same problem persists. I have changed the form method to "get" and it is still problematic.
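The approach that did the trick, fetching the result once and slicing pages from the cached copy instead of re-querying, can be sketched generically (a hypothetical Java helper for illustration, not CFML):

```java
import java.util.Collections;
import java.util.List;

public class CachedPager {
    // Fetch the full result once, keep it (e.g. in the user's session),
    // and slice pages out of memory instead of re-running the query for
    // every page. Page numbers are 1-based.
    public static <T> List<T> page(List<T> cached, int pageNumber, int pageSize) {
        int from = (pageNumber - 1) * pageSize;
        if (pageNumber < 1 || from >= cached.size()) {
            return Collections.emptyList(); // out of range: no rows
        }
        int to = Math.min(from + pageSize, cached.size());
        return cached.subList(from, to);
    }
}
```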