Fetching multiple records in Alert.
Hi All,
I have a requirement to print, through an Alert, the key members of a project who are end-dated. If a single member is end-dated I am able to print it, but if multiple users are end-dated then I'm facing a challenge.
Is there any way to print multiple records fetched by the SQL statement in the Alert?
Regards,
Jana
Unfortunately, the current version of the Graph Client Library doesn't support subsequent page sizes > 100 (though the service side itself will allow you to go up to 999, so if you make the call without the client library, you
should be able to go higher).
I've provided feedback on this for potential improvements in future releases of the GCL.
Similar Messages
-
Fetching many records all at once is no faster than fetching one at a time
Hello,
I am having a problem getting NI-Scope to perform adequately for my application. I am sorry for the long post, but I have been going around and around with an NI engineer through email and I need some other input.
I have the following software and equipment:
LabView 8.5
NI-Scope 3.4
PXI-1033 chassis
PXI-5105 digitizer card
DELL Latitude D830 notebook computer with 4 GB RAM.
I tested the transfer speed of my connection to the PXI-1033 chassis using the niScope Stream to Memory Maximum Transfer Rate.vi found here:
http://zone.ni.com/devzone/cda/epd/p/id/5273. The result was 101 MB/s.
I am trying to set up a system whereby I can press the start button and acquire short waveforms which are individually triggered. I wish to acquire these individually triggered waveforms indefinitely. Furthermore, I wish to maximize the rate at which the triggers occur. In the limiting case where I acquire records of one sample, the record size in memory is 512 bytes (using the formula to calculate 'Allocated Onboard Memory per Record' found in the NI PXI/PCI-5105 Specifications under the heading 'Waveform Specifications', pg. 16). The PXI-5105 trigger re-arms in about 2 microseconds (500 kHz), so to trigger at that rate indefinitely I would need a transfer speed of at least 256 MB/s. So clearly, in this case the limiting factor for increasing the rate I trigger at while still being able to acquire indefinitely is the rate at which I transfer records from memory to my PC.
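The bandwidth arithmetic above can be sketched as a quick calculation (the 512-byte per-record footprint and the 500 kHz re-arm rate are the figures quoted in this post; the class and method names are just for illustration):

```java
public class TriggerBandwidth {
    // bytes per second needed to stream records off the digitizer at a given trigger rate
    static long requiredBytesPerSec(long recordBytes, long triggerHz) {
        return recordBytes * triggerHz;
    }

    public static void main(String[] args) {
        // 512-byte record at a 500 kHz trigger rate -> 256,000,000 B/s (256 MB/s),
        // well above the ~101 MB/s measured for the MXI link
        System.out.println(requiredBytesPerSec(512, 500_000) + " B/s");
    }
}
```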
To maximize my record transfer rate, I should transfer many records at once using the Multi Fetch VI, as opposed to the theoretically slower method of transferring one at a time. To compare the rates at which I can transfer records using an all-at-once versus a one-at-a-time method, I modified the niScope EX Timestamps.vi to allow me to choose between these transfer methods by changing the constant wired to the Fetch Number of Records property node to either -1 or 1 respectively. I also added a loop that ensures that all records are acquired before I begin the transfer, so that acquisition and trigger rates do not interfere with measuring the record transfer rate. This modified VI is attached to this post.
I have the following results for acquiring 10k records. My measurements are done using the Profile Performance and Memory Tool.
I am using a 250kHz analog pulse source.
Fetching 10000 records one record at a time, the niScope Multi Fetch Cluster takes a total time of 1546.9 milliseconds, or 155 microseconds per record.
Fetching all 10000 records at once, the niScope Multi Fetch Cluster takes a total time of 1703.1 milliseconds, or 170 microseconds per record.
I have tried this for larger and smaller total numbers of records, and the transfer time is always around 170 microseconds per record regardless of whether I transfer one at a time or all at once. But with a 100 MB/s link and a 512-byte record size, the fetch speed should approach 5 microseconds per record as you increase the number of records fetched at once.
With this, my application will be limited to a trigger rate of 5 kHz for running indefinitely, and it should be capable of closer to a 200 kHz trigger rate for extended periods of time. I have a feeling that I am missing something simple or am just confused about how the Fetch functions should work. Please enlighten me.
Attachments:
Timestamps.vi 73 KB
Hi ESD
Your numbers for testing the PXI bandwidth look good. A value of approximately 100 MB/s is reasonable when pulling data across the PXI bus continuously in larger chunks. This may decrease a little when working with MXI in comparison to using an embedded PXI controller. I expect you were using the streaming example "niScope Stream to Memory Maximum Transfer Rate.vi" found here: http://zone.ni.com/devzone/cda/epd/p/id/5273.
Acquiring multiple triggered records is a little different. There are
a few techniques that will help to make sure that you are able to fetch
your data fast enough to be able to keep up with the acquired data or
desired reference trigger rate. You are certainly correct that it is
more efficient to transfer larger amounts of data at once, instead of
small amounts of data more frequently as the overhead due to DMA
transfers becomes significant.
The trend you saw, that fetching fewer records at a time was more efficient, sounded odd, so I ran your example and tracked down what was causing it. I believe it is actually the for loop that you had in your acquisition loop. I made a few modifications to the application to display the total fetch time to acquire 10000 records. The best fetch time is when all records are pulled in at once. I left your code in the application but temporarily disabled the for loop to show the fetch performance. I also added a loop to ramp the fetch number up and graph the fetch times. I will attach the modified application as well as the fetch results I saw on my system for reference. When the for loop is enabled, the performance was worst at 1-record fetches; the fetch time dipped around 500 records/fetch and began to ramp up again as the records/fetch increased to 10000.
Note I am using the 2D I16 fetch as it is more efficient to keep the data unscaled. I have also added an option to use immediate triggering - this is just because I was not near my hardware to physically connect a signal so I used the trigger holdoff property to simulate a given trigger rate.
Hope this helps. I was working in LabVIEW 8.5; if you are working with an earlier version, let me know.
Message Edited by Jennifer O on 04-12-2008 09:30 PM
Attachments:
RecordFetchingTest.vi 143 KB
FetchTrend.JPG 37 KB -
The Product.Category dimension has 4 child nodes: Accessories, Bikes, Clothing and Components. My problem is that when I have thousands of first-level nodes my application takes a long time to load. Is there a way to fetch only, say, 100 records at a time, so that when
I click a Next button I get the next 100?
E.g., on the 1st click of a button I fetch 2 members:
WITH MEMBER [Measures].[ChildrenCount] AS
[Product].[Category].CurrentMember.Children.Count
SELECT [Measures].[ChildrenCount] ON 1
,TopCount([Product].[Category].Members, 2) on 0
FROM [Adventure Works]
This fetches only Accessories. Is there a way to fetch the next two records, Bikes and Clothing, on click?
Then Components on the next click, and so on.
Hi Tsunade,
According to your description, there are thousands of members in your cube. It will take a long time to retrieve all the members at once, so in order to improve the performance you are looking for a function to fetch 100 records at a time, right? Based on my
research, there is no such functionality to work around this requirement currently.
If you have any concern about this behavior, you can submit feedback at
http://connect.microsoft.com/SQLServer/Feedback and hope it is resolved in the next release of a service pack or of the product. Your feedback enables Microsoft to make its software and services the best that they can be, and Microsoft might consider adding this feature
in a following release after official confirmation.
Regards,
Charlie Liao
TechNet Community Support -
Can we split and fetch the records in Database Adapter
Hi,
I designed a Database Adapter to fetch records from an Oracle database. Sometimes the Database Adapter needs to fetch around 5,000 or 10,000 records in a single shot. In that case my BPEL process is choking and failing with the error
java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOf(Arrays.java:2882) at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
Could someone help me to resolve this?
In the Database Adapter, can we split and fetch the records if the number of records is more than 1000?
E.g. the first 100 records as one set, the next 100 as a second set, and so on.
Thank you.
You can send the records as batches using the debatching feature of the DB adapter. Refer to the documentation for implementation details.
-
Query takes too much time in fetching last records.
Hi,
I am using Oracle 8.1 and trying to execute a SQL statement; it takes a few minutes to display records.
When trying to fetch all the records, it is fast up to some point and then takes much longer to fetch the last records.
Ex: If total records = 16336, then it fetches records quickly up to 16300 and takes approximately 500 seconds to fetch the last 36 records.
Could you kindly let me know the reason?
I have copied the explain plan below for your reference. Please let me know if anything else is required.
SELECT STATEMENT, GOAL = RULE 4046 8 4048
NESTED LOOPS OUTER 4046 8 4048
NESTED LOOPS OUTER 4030 8 2952
FILTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER 4014 8 1728
NESTED LOOPS 3998 8 936
TABLE ACCESS BY INDEX ROWID IFSAPP CUSTOMER_ORDER_TAB 3966 8 440
INDEX RANGE SCAN IFSAPP CUSTOMER_ORDER_1_IX 108 8
TABLE ACCESS BY INDEX ROWID IFSAPP CUSTOMER_ORDER_LINE_TAB 4 30667 1901354
INDEX RANGE SCAN IFSAPP CUSTOMER_ORDER_LINE_PK 3 30667
TABLE ACCESS BY INDEX ROWID IFSAPP PWR_CONS_PARCEL_CONTENT_TAB 2 2000 198000
INDEX RANGE SCAN IFSAPP PWR_CONS_PARCEL_CONTENT_1_IDX 1 2000
TABLE ACCESS BY INDEX ROWID IFSAPP PWR_CONS_PARCEL_TAB 1 2000 222000
INDEX UNIQUE SCAN IFSAPP PWR_CONS_PARCEL_PK 2000
TABLE ACCESS BY INDEX ROWID IFSAPP CONSIGNMENT_PARCEL_TAB 1 2000 84000
INDEX UNIQUE SCAN IFSAPP CONSIGNMENT_PARCEL_PK 2000
TABLE ACCESS BY INDEX ROWID IFSAPP PWR_OBJECT_CONNECTION_TAB 2 20 2740
INDEX RANGE SCAN IFSAPP PWR_OBJECT_CONNECTION_IX1 1 20
Thanks.
We are using the PL/SQL Developer tool. The time we mentioned in the post is an approximate time.
Apologies for not mentioning these details in the previous thread.
Let it be approximate time, but how did you arrive at that time? When a query fetches records, how did you derive that one portion is fetched in x and the remainder in y?
I would suggest this could be some issue with PL/SQL Developer (I have never used this tool myself), but for performance testing I would suggest you use SQL*Plus. That's the best tool to test performance. -
How can we split a select query to 3 or 4 if it is fetching much records?
I am running a query like:
select * from table_name
it will be fetching 152940696 records. Now I want to fetch this result as 3 or 4 select statements. That is, in the second query I want to fetch the records from where I stopped in the first query, and similarly the 3rd has to continue from the 2nd query, and the 4th query has to start from where I stopped in the 3rd query.
When I tried with rownum, I could fetch the records up to < or <= a particular count, like 100000000. But above this count I cannot fetch using rownum, because > or >= won't work with rownum.
Is there any other way to split the select query as I explained?
Thanks in advance
I'll assume you want to split the query up for performance reasons.
The easiest way to do this if you have the license is to use the parallel query option, which can help, hurt or do nothing. The only way to find out is to try. PQO would be best from a performance standpoint if possible, provided it will do what you need.
Failing that as has been suggested you need a logical, scalable way to divide up the queries. It has already been pointed out that the rownum solution probably will not work correctly. Also, the MINUS with ROWNUM idea has the disadvantage of having to read a lot of the same data twice, making the query run longer.
Perhaps a range would provide a way to split up the data - something like
select whatever
from table
where primary_key < 10000000;
select whatever
from table
where primary_key between 10000000 and 19999999;
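If the key really is a reasonably dense numeric sequence, the boundaries for such range predicates can be computed mechanically rather than by hand; a small sketch (the minimum key of 1 is an assumption, while the row count comes from the question):

```java
public class RangeSplitter {
    // split [min, max] into `parts` contiguous, non-overlapping ranges
    static long[][] split(long min, long max, int parts) {
        long span = (max - min + 1 + parts - 1) / parts;  // ceiling division
        long[][] ranges = new long[parts][2];
        for (int i = 0; i < parts; i++) {
            ranges[i][0] = min + i * span;
            ranges[i][1] = Math.min(ranges[i][0] + span - 1, max);
        }
        return ranges;
    }

    public static void main(String[] args) {
        for (long[] r : split(1, 152_940_696L, 4)) {
            System.out.println("where primary_key between " + r[0] + " and " + r[1]);
        }
    }
}
```

Note this only yields evenly sized result sets when the key values are roughly uniformly distributed; with sparse or skewed keys the ranges cover the keyspace correctly but return uneven row counts.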
... -
Hi,
Please suggest the best way to fetch the records from the table designed below. It is Oracle 10gR2 on Linux.
Whenever a client visits the office, a record is created for him. The company policy is to maintain 10 years of data in the transaction table, and the table accumulates about 3 million records per year.
The table has the following key columns for the Select (sample table):
Client_Visit
ID Number(12,0) --sequence generated number
EFF_DTE DATE --effective date of the customer (sometimes the client becomes invalid and he will be valid again)
Create_TS Timestamp(6)
Client_ID Number(9,0)
Cascade_Flg Varchar2(1)
On most of the reports the records are fetched by Max(eff_dte) and Max(create_ts) and cascade flag ='Y'.
I have the following queries, but both of them are not cost-effective and take 8 minutes to display the records.
Code 1:
SELECT au_subtyp1.au_id_k,
       au_subtyp1.pgm_struct_id_k
  FROM au_subtyp au_subtyp1
 WHERE au_subtyp1.create_ts =
       (SELECT MAX (au_subtyp2.create_ts)
          FROM au_subtyp au_subtyp2
         WHERE au_subtyp2.au_id_k = au_subtyp1.au_id_k
           AND au_subtyp2.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
           AND au_subtyp2.eff_dte =
               (SELECT MAX (au_subtyp3.eff_dte)
                  FROM au_subtyp au_subtyp3
                 WHERE au_subtyp3.au_id_k = au_subtyp2.au_id_k
                   AND au_subtyp3.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                   AND au_subtyp3.eff_dte <= TO_DATE ('2012-12-31', 'YYYY-MM-DD')))
   AND au_subtyp1.exists_flg = 'Y'
Explain Plan
Plan hash value: 2534321861
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 91 | | 33265 (2)| 00:06:40 |
|* 1 | FILTER | | | | | | |
| 2 | HASH GROUP BY | | 1 | 91 | | 33265 (2)| 00:06:40 |
|* 3 | HASH JOIN | | 1404K| 121M| 19M| 33178 (1)| 00:06:39 |
|* 4 | HASH JOIN | | 307K| 16M| 8712K| 23708 (1)| 00:04:45 |
| 5 | VIEW | VW_SQ_1 | 307K| 5104K| | 13493 (1)| 00:02:42 |
| 6 | HASH GROUP BY | | 307K| 13M| 191M| 13493 (1)| 00:02:42 |
|* 7 | INDEX FULL SCAN | AUSU_PK | 2809K| 125M| | 13493 (1)| 00:02:42 |
|* 8 | INDEX FAST FULL SCAN| AUSU_PK | 2809K| 104M| | 2977 (2)| 00:00:36 |
|* 9 | TABLE ACCESS FULL | AU_SUBTYP | 1404K| 46M| | 5336 (2)| 00:01:05 |
Predicate Information (identified by operation id):
1 - filter("AU_SUBTYP1"."CREATE_TS"=MAX("AU_SUBTYP2"."CREATE_TS"))
3 - access("AU_SUBTYP2"."AU_ID_K"="AU_SUBTYP1"."AU_ID_K")
4 - access("AU_SUBTYP2"."EFF_DTE"="VW_COL_1" AND "AU_ID_K"="AU_SUBTYP2"."AU_ID_K")
7 - access("AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
filter("AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND
"AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
8 - filter("AU_SUBTYP2"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
9 - filter("AU_SUBTYP1"."EXISTS_FLG"='Y')
Code 2:
I already raised a thread a week back and Dom suggested the following query; it is cost-effective, but the performance is the same and it used the same amount of temp tablespace.
select au_id_k,pgm_struct_id_k from (
SELECT au_id_k
, pgm_struct_id_k
, ROW_NUMBER() OVER (PARTITION BY au_id_k ORDER BY eff_dte DESC, create_ts DESC) rn,
create_ts, eff_dte,exists_flg
FROM au_subtyp
WHERE create_ts < TO_DATE('2013-01-01','YYYY-MM-DD')
AND eff_dte <= TO_DATE('2012-12-31','YYYY-MM-DD')
) d where rn =1 and exists_flg = 'Y'
--Explain Plan
Plan hash value: 4039566059
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2809K| 168M| | 40034 (1)| 00:08:01 |
|* 1 | VIEW | | 2809K| 168M| | 40034 (1)| 00:08:01 |
|* 2 | WINDOW SORT PUSHED RANK| | 2809K| 133M| 365M| 40034 (1)| 00:08:01 |
|* 3 | TABLE ACCESS FULL | AU_SUBTYP | 2809K| 133M| | 5345 (2)| 00:01:05 |
Predicate Information (identified by operation id):
1 - filter("RN"=1 AND "EXISTS_FLG"='Y')
2 - filter(ROW_NUMBER() OVER ( PARTITION BY "AU_ID_K" ORDER BY
INTERNAL_FUNCTION("EFF_DTE") DESC ,INTERNAL_FUNCTION("CREATE_TS") DESC )<=1)
3 - filter("CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND "EFF_DTE"<=TO_DATE('
2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
Thanks,
Vijay
Hi Justin,
Thanks for your reply. I am running this on our test environment, as I don't want to run it on the production environment now. The test environment holds 2809605 records (about 2.8 million).
The query output count is 281699 records (about 280 thousand) and the selectivity is 0.099. The distinct values of create_ts, eff_dte, and exists_flg come to 2808905 records. I am sure the index scan is not going to help much, as you said.
The core problem is that both queries are using a lot of temp tablespace. When we use this query to join to other tables, the other table has the same design as below, so the temp tablespace grows bigger.
Both the production and test environment are 3 Node RAC.
First Query...
CPU used by this session 4740
CPU used when call started 4740
Cached Commit SCN referenced 21393
DB time 4745
OS Involuntary context switches 467
OS Page reclaims 64253
OS System time used 26
OS User time used 4562
OS Voluntary context switches 16
SQL*Net roundtrips to/from client 9
bytes received via SQL*Net from client 2487
bytes sent via SQL*Net to client 15830
calls to get snapshot scn: kcmgss 37
consistent gets 52162
consistent gets - examination 2
consistent gets from cache 52162
enqueue releases 19
enqueue requests 19
enqueue waits 1
execute count 2
ges messages sent 1
global enqueue gets sync 19
global enqueue releases 19
index fast full scans (full) 1
index scans kdiixs1 1
no work - consistent read gets 52125
opened cursors cumulative 2
parse count (hard) 1
parse count (total) 2
parse time cpu 1
parse time elapsed 1
physical write IO requests 69
physical write bytes 17522688
physical write total IO requests 69
physical write total bytes 17522688
physical write total multi block requests 69
physical writes 2139
physical writes direct 2139
physical writes direct temporary tablespace 2139
physical writes non checkpoint 2139
recursive calls 19
recursive cpu usage 1
session cursor cache hits 1
session logical reads 52162
sorts (memory) 2
sorts (rows) 760
table scan blocks gotten 23856
table scan rows gotten 2809607
table scans (short tables) 1
user I/O wait time 1
user calls 11
workarea executions - onepass 1
workarea executions - optimal 9
Second Query
CPU used by this session 1197
CPU used when call started 1197
Cached Commit SCN referenced 21393
DB time 1201
OS Involuntary context switches 8684
OS Page reclaims 21769
OS System time used 14
OS User time used 1183
OS Voluntary context switches 50
SQL*Net roundtrips to/from client 9
bytes received via SQL*Net from client 767
bytes sent via SQL*Net to client 15745
calls to get snapshot scn: kcmgss 17
consistent gets 23871
consistent gets from cache 23871
db block gets 16
db block gets from cache 16
enqueue releases 25
enqueue requests 25
enqueue waits 1
execute count 2
free buffer requested 1
ges messages sent 1
global enqueue get time 1
global enqueue gets sync 25
global enqueue releases 25
no work - consistent read gets 23856
opened cursors cumulative 2
parse count (hard) 1
parse count (total) 2
parse time elapsed 1
physical read IO requests 27
physical read bytes 6635520
physical read total IO requests 27
physical read total bytes 6635520
physical read total multi block requests 27
physical reads 810
physical reads direct 810
physical reads direct temporary tablespace 810
physical write IO requests 117
physical write bytes 24584192
physical write total IO requests 117
physical write total bytes 24584192
physical write total multi block requests 117
physical writes 3001
physical writes direct 3001
physical writes direct temporary tablespace 3001
physical writes non checkpoint 3001
recursive calls 25
session cursor cache hits 1
session logical reads 23887
sorts (disk) 1
sorts (memory) 2
sorts (rows) 2810365
table scan blocks gotten 23856
table scan rows gotten 2809607
table scans (short tables) 1
user I/O wait time 2
user calls 11
workarea executions - onepass 1
workarea executions - optimal 5
Thanks,
Vijay
Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:17 AM
Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:19 AM -
Select query on QALS table taking around 4 secs to fetch one record
Hi,
I have one select query that takes around 4 secs to fetch one record. I would like to know if there are any ways to reduce the time taken for this select.
SELECT
b~prueflos
b~matnr
b~lagortchrg
a~vdatum
a~kzart
a~zaehler
a~vcode
a~vezeiterf
FROM qals AS b LEFT OUTER JOIN qave AS a ON
b~prueflos = a~prueflos
INTO TABLE t_qals1
FOR ALL ENTRIES IN t_lgorts
WHERE matnr = t_lgorts-matnr
AND werk = t_lgorts-werks
AND lagortchrg = t_lgorts-lgort
AND stat35 = c_x
AND art IN (c_01,c_08).
When I took the SQL trace, I found these further details:
Column No.Of Distinct Records
MANDANT 2
MATNR 2.954
WERK 30
STAT34 2
HERKUNFT 5
Analyze Method Sample 114.654 Rows
Levels of B-Tree 2
Number of leaf blocks 1.126
Number of distinct keys 16.224
Average leaf blocks per key 1
Average data blocks per key 3
Clustering factor 61.610
Also note, this select query is using INDEX RANGE SCAN QALS~D.
All suggestions are welcome.
Regards,
Vijaya
Hi Rob,
It's strange, but the table t_lgorts has only ONE record:
MATNR = 000000000500003463
WERK = D133
LAGORTCHRG = 0001
I have also seen that the table QALS has 2266 records that satisfy the above criteria.
I am not sure, but if we write the above query as a subquery instead of an outer join, will it improve the performance?
Will check it from my side too..
Regards,
Vijaya -
Need query to fetch single record
I have table t1 with the data below:
c1 c2
1 10
2 10
When I use the query below
select max(c2) from t1 group by c1
I am getting both the records.
How can I fetch only one record (any one)?
Hi,
Here's one way
WITH got_r_num AS
(
SELECT c1, c2
, ROW_NUMBER () OVER (ORDER BY c2 DESC) AS r_num
FROM t1
)
SELECT c1, c2
FROM got_r_num
WHERE r_num = 1
The row displayed will be the one with the highest c2 value (or one of those rows, if there happens to be a tie.)
This is called a Top-N Query, because it picks N items (N=1 in this case) from the top of an ordered list.
Starting in Oracle 12.1, you can also use
SELECT c1, c2
FROM t1
ORDER BY c2 DESC
FETCH FIRST 1 ROW ONLY -
Hello,
Could someone help me please ?
I have a listing of my sales orders and I want to make changes to an order by opening the form with that record fetched. When I click on a particular order number in my listing of orders and call the form to display the details, it calls the form but says "Query could not fetch the record". I do not know why. Please help me with the solution.
Thanx
Hello,
I think you are passing orderno to the called form as a parameter. If you are using a parameter list, check:
1. Is the parameter data getting into the form correctly?
2. Have you changed the WHERE clause of the other block so that it will display the record with the passed orderno?
I am expecting more details from you.
Thanx
Adi -
Fetch the records based on number
Hi experts,
I have a requirement: if the user gives input 5, it should fetch 5 records from the database.
for example database values are
SERNR MATNR
101 A
102 A
103 A
104 A
105 A
106 A
107 A
If the user gives the input as 5, it should fetch 5 records, i.e. 101 to 105. Can anybody please help me write the select query for this?
Thanks in advance,
Veena.
Edited by: s veena on Jan 18, 2011 5:52 AM
Hi Veena,
You can use UP TO n ROWS in your select query. For example:
SELECT MATNR FROM MARA INTO TABLE IT_MATNR UP TO P_NUMBER ROWS.
"Here P_NUMBER is the selection screen parameter
It will fetch records based on the number in the parameter.
Thanks & Regards,
Faheem. -
say i have emp table
eno ename sales
1 david 1100
2 lara 200
3 james 1000
1 david 1200
2 lara 5400
4 white 890
3 james 7500
1 david 1313
eno can be duplicate
When I give empno as 1,
I want to display his sales, i.e. 1100, 1200, 1313.
The first time I will go to the database and fetch the records,
but from the next time onwards I don't go to the database; I will fetch the records from a cache.
I thought of doing it using a HashMap or Hashtable, but neither of those allows duplicate keys (and empno has duplicate values).
How do I solve this problem?
Hi,
Have you ever considered splitting that table up? You are thinking about caching; that's a very good idea. But doesn't it make it very evident that the table structure you have keeps a lot of redundant data? Especially, it hardly makes sense to have sales figures in an emp table. Instead you can have an Emp table containing eno and ename, with eno as the primary key, and another table called Sales with eno and sales columns, where eno references the Emp table.
If you still want to continue with this structure, then I think you can go ahead with the solution already suggested to you.
Aviroop -
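If the original single-table structure is kept, the duplicate-key problem described above is conventionally handled by caching a list of values per key rather than a single value; a minimal sketch (the class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SalesCache {
    private final Map<Integer, List<Integer>> cache = new HashMap<>();

    // add one sales row; duplicate empnos simply extend that empno's list
    public void put(int empno, int sales) {
        cache.computeIfAbsent(empno, k -> new ArrayList<>()).add(sales);
    }

    // all cached sales for the empno, or an empty list if nothing is cached yet
    public List<Integer> salesFor(int empno) {
        return cache.getOrDefault(empno, List.of());
    }

    public boolean isCached(int empno) {
        return cache.containsKey(empno);
    }
}
```

On a cache miss (isCached returning false) you would query the database once and put each row; subsequent lookups for that empno are then served from memory.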
Fetch last record from database table
hi,
How do I fetch the last record from a database table?
Please reply at the earliest.
Regards,
Jyotsna
Moderator message - Please search before asking - post locked
Edited by: Rob Burbank on Dec 11, 2009 9:44 AM
abhi,
just imagine the table to be BSEG or FAGLFLEXA... then what would the performance of the code be?
Anyway,
Jyotsna, first check if you have a pattern to follow, e.g. the primary key field has an increasing number range, or it would be great if you find a date field which stores the inserted date or something similar.
You can select the MAX of that field, or order by descending using SELECT SINGLE.
Or get all the data and sort it in descending order (again you need some criterion, like a date),
then read the first entry using READ TABLE itab INDEX 1. -
Placeholder fetching multiple records
Hi,
From the piece of code below, in a placeholder, I am fetching multiple records.
How can I show those multiple records to the user?
<cq:contentQuery logic="and">
<cq:contentPropertyComparison logic="and" propertySet="UCMRepository/IDC:Folder" name="dDocAuthor" type="string">
<cq:equals>
<cq:literal>weblogic</cq:literal>
</cq:equals>
</cq:contentPropertyComparison>
</cq:contentQuery>
Thanks in Advance,
Venu
Placeholders will, by design, select and display a single returned content item. If you would like to render more items, look to the Content Selector technology rather than the Placeholder.
This document might be of assistance: [Choosing the Type of Interaction Management to Develop|http://download.oracle.com/docs/cd/E13155_01/wlp/docs103/interaction/interaction.html#wp1009277]
Brad -
Hi All,
I want to fetch some records from a database table which are the latest entries. How can I fetch these records?
Regards,
Jeetu
Hi,
Method 1:
1) Check whether you have a DATE field in your database table.
2) Fetch all records into your internal table.
3) Sort the internal table by date descending.
4) You will get the latest records on top in your internal table.
Method 2:
If you want only the latest 10 records in your internal table:
data: begin of itab occurs 10,
" declare your fields here; make sure that you have a DATE field in your internal table
end of itab.
select <fields> from <database table> into itab.
append itab sorted by date.
endselect.
Note: the date should be one of the fields of itab.