Best way of filtering records in a block
Hi,
I have a form with a data block that I want to filter. When I launch the form, all records are fetched, but then I want to choose the records displayed based on some criteria. Can I apply this criteria to the block so that it is not necessary to requery the database? Which is the best way to achieve this?
Best regards,
Bruno Martins
To save network traffic, you can search within the form's memory. Since you first fetched all departments and their employees, and those records stay in form memory, you should not use SET_BLOCK_PROPERTY('..', DEFAULT_WHERE), which would requery the database.
Instead, add another button that calls ENTER_QUERY; then, in the WHEN-LIST-CHANGED trigger:
GO_ITEM('emp.deptno');
:emp.deptno := :deptno_list;
EXECUTE_QUERY;
This way you search only among the records in memory. The trade-off is that you need either the extra ENTER_QUERY button or the default menu bar's Enter Query item.
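As a rough sketch, the WHEN-LIST-CHANGED trigger described above might look like this (the block and item names are taken from the example and are assumptions about your actual form):

```sql
-- WHEN-LIST-CHANGED trigger on the department list item.
-- Assumes the block was already put into enter-query mode
-- via the separate ENTER_QUERY button mentioned above.
GO_ITEM('emp.deptno');            -- move focus into the data block
:emp.deptno := :ctrl.deptno_list; -- copy the chosen value as the query criterion
EXECUTE_QUERY;                    -- fetch the matching records
```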
Similar Messages
-
Hi all,
I am working on form developed by some other party.
In this form one is control block which contains a combo box.
There is another data block which is based on a table.
When I run the form the records in this block are filtered by the value chosen in combo box.
But I am not able to find the code where this functionality is achieved.
I have checked WHERE clause property of block and the property is null.
Also default_where is not set at runtime.
Is there any other way to filter records in a block?
Thanks in advance.
MK

Hi there,
If there is a Push Button on the control block to initiate Execute Query for the data block, then the code is in its When-Button-Pressed trigger.
If there is no Push Button and if the data block (based on table) changes after a new value is chosen from the combo box, then the default_where or onetime_where code may be hiding behind a When-Validate-item trigger (or some other trigger) in the combo box.
In our application, we have many 'search' fields in the control block. We use a Push Button to build the dynamic where clause for the data block.
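A minimal sketch of that push-button approach (the block and item names here are hypothetical, not from the original form):

```sql
-- WHEN-BUTTON-PRESSED trigger on a 'Search' button in the control block
DECLARE
  v_where VARCHAR2(2000);
BEGIN
  IF :ctrl.dept_combo IS NOT NULL THEN
    v_where := 'deptno = ' || :ctrl.dept_combo;
  END IF;
  SET_BLOCK_PROPERTY('emp', DEFAULT_WHERE, v_where);
  GO_BLOCK('emp');
  EXECUTE_QUERY;
END;
```

Searching the form's triggers for SET_BLOCK_PROPERTY or EXECUTE_QUERY calls like these is usually the quickest way to find where the filtering is done.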
Regards,
John -
Need help knowing best way to create record
What is the best way to create a new record... I have a form pulling info from about 20 tables. When the user wants to create a new record, I want a new form to open and allow them to select the values based on the foreign key fields so it makes more sense than just random numbers.
The problem I'm having is knowing when and how to insert. Should the "new record" form be its own database block that inserts from within it? (But then, when they go back to the original form, they must requery to see the new value.) Or what about copying each field back to the original form? I'm new to Forms and would appreciate any insight and tips.

Would a wire like this help me?
I doubt it. You want FireWire.
Take look at the Canopus ADVC300. It comes with a nice Macintosh application that works flawlessly with iMovie 06.
http://www.canopus.com/products/ADVC300/index.php
Yes, it does cost more but it works. -
Best way to Insert Records into a DB
hi
I have around 400,000 (4 lakh) records to insert into a DB. What's the optimal way to do it? I tried doing it using threads but gained only about 2 seconds for 4 lakh records. Suggest me a better way.

This is a very hard thread; the little information you give makes it hard to really understand the problem.
Where do the 400,000 input records come from?
Must those records only be added to the output table?
How many rows, and how many indexes, does the output table have?
The cost of each insert also depends on how many indexes the DBMS has to maintain on the output table.
In general, the time it takes to add a large number of rows to a table depends on many variables (hardware performance, DBMS performance, and so on).
If you only have to insert, and your input records stay in another table, I think the best-performing way is an INSERT ... SELECT:
insert into output select * from input
If your input records are in a text file, the best way is to use the native DBMS importer.
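As a sketch, the set-based insert mentioned above can also use Oracle's direct-path hint for large volumes (table names are placeholders):

```sql
-- Direct-path insert: appends above the high-water mark and can be
-- much faster than row-by-row inserts for bulk loads.
INSERT /*+ APPEND */ INTO output
SELECT * FROM input;
COMMIT;  -- required before the table can be read again in this session
```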
Let me know something more....
Regards -
Best way to mic/record acoustic guitar in song with wide dynamic range
Hey Everyone,
I'm currently working on recording an original song for acoustic guitar and voice. I'm running into trouble, though, because the guitar parts for the verse and refrain are both quiet and understated, but I have this bridge section I do that is very loud. I'm mic-ing my guitar with an AKG 200 Perception large-diaphragm condenser mic (round about the 12th fret), and also via the built-in mic in my guitar, both of which I run directly into my Presonus Firepod. I then record on two tracks at the same time, one for the AKG and one for the built-in mic. But I'm having to set the sensitivity so low, so as not to clip/distort during the loud bridge, that I'm just getting a poor sound on the quiet verses and refrains. What's the best way to deal with this? Have a pair of tracks for the quiet parts, and a separate pair for the loud parts? But then I end up with an inconsistent guitar sound... I'm really fairly new to both the recording process as well as Logic Pro 7. ANY AND ALL suggestions and/or resources are welcome! Thanks very much.
David
MacBook, Mac OS X (10.4.9), 2.0 GHz Processor, 2 GB RAM, Logic Pro 7

Hi David,
When you say "you're getting poor sound" when it's turned down, is it really poor sound? Or are your ears favoring the other because it's louder? This is a common mistake with beginners. Louder isn't better, it's just louder.
You may need to experiment, but the first thing I would do is move the mic back a bit. Give your guitar some room to develop its sound, before it hits the mic. This will not only make the guitar sound more natural, but will buy you a little room with dynamics.
Normally, compressors are used to help tame the dynamics of a thing like this, but if you do not have decent quality hardware compressors, I would be patient, and find the best "middle ground" you can.
Keep in mind, when mixing, you can use automation and compression to even out the sections somewhat. Don't expect it to "go to tape" dynamically perfect. It doesn't always work that way.
But you could certainly record this in 2 sections; just make sure the ONLY difference is in the level of the pre-amps going to Logic. Make sure the guitar/microphone distance stays the same.
Then when mixing, you have control over the levels of the different sections, and for all intents and purposes, they should sound the same... meaning the same guitar in the same room... just played louder/softer, which by itself gives two totally different sounds. Don't expect the softer parts to sound like the louder parts. They simply can't... and, more importantly, they shouldn't. That's the beauty of any acoustic instrument. -
Best way of deleting records??
Hi,
I'm having a performance problem: I need to delete 8 million records from a table that has more than 50 million. The table is partitioned by the column used in the DELETE's WHERE clause. I created the following procedure to do it:
procedure del_pay(v_year in number, v_month in number) is
  cursor c_pay is
    select rowid
      from payments
     where am_integ = ((v_year * 100) + v_month);
  x number;
  y number;
  type t_rowid is table of rowid
    index by binary_integer;
  tab_rowid t_rowid;
begin
  open c_pay;
  fetch c_pay bulk collect into tab_rowid;
  close c_pay;
  y := tab_rowid.first;
  x := 50000;
  loop
    exit when x > tab_rowid.last;
    forall i in y .. x
      delete from payments d
       where d.rowid = tab_rowid(i);
    commit;
    y := x + 1;
    x := x + 50000;
  end loop;
  forall i in y .. tab_rowid.last
    delete from payments d
     where d.rowid = tab_rowid(i);
  tab_rowid.delete;
  dbms_session.free_unused_user_memory;
  commit;
end del_pay;
Is there another way of making this run faster?
Thanks for your help.

I think you could try using a single DELETE statement instead of going through a cursor loop. Check if any INDEXES are present; if possible, try to DISABLE those indexes first.
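A sketch of what that single-statement DELETE might look like, using the column from the procedure's cursor (the year/month values are placeholders):

```sql
-- One set-based delete instead of a rowid-by-rowid loop
DELETE FROM payments
 WHERE am_integ = (2008 * 100) + 6;  -- e.g. June 2008
COMMIT;
```

Since the table is partitioned on this column, ALTER TABLE payments TRUNCATE PARTITION for the matching partition (if it contains only rows to be removed) can be faster still, though it is DDL and cannot be rolled back.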
Thanks -
Best way to obtain records that are NOT in another table
I have two rather large tables in oracle. An Account table that has millions of rows. Each account may be enrolled into a particular program and therefore can also be in an Enrollment table, also with millions of rows. I'm trying to find the most optimal way to find any accounts in ACCOUNT that are NOT in the Enrollment table.
I was doing something like this:
select /*+ index(ACCOUNT idx_acct_no) */
a.acct_no
from ACCOUNT a
where a.acct_no not in (Select e.acct_no from ENROLLMENT e);
This takes a VERY long time to execute, even though I am using the index.
I even tried to use the PK on the ACCOUNT table, as it is also a FK on the ENROLLMENT table as such:
select a.acct_no
from ACCOUNT a
where a.id not in (Select e.id from ENROLLMENT e);
this too takes too long to get back (if at all).
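For what it's worth, a commonly used alternative (a sketch, not something suggested in the replies) is an anti-join written with NOT EXISTS, which the optimizer can often execute as a hash anti-join and which also sidesteps the NULL pitfall of NOT IN:

```sql
SELECT a.acct_no
  FROM account a
 WHERE NOT EXISTS (SELECT 1
                     FROM enrollment e
                    WHERE e.acct_no = a.acct_no);
```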
Is there a better way to do this selection, please?

Well, if you have the energy to type in the whole list, the syntax you've given will work, unless you exceed the permitted number of elements.
But a practical solution would be to turn the list into a table. You still haven't got the hang of this "giving us enough information" concept, so let's presume:
(1) you're on a version of the database which is 9i or higher
(2) you have this list in a file of some sort.
In which case use an external table or perhaps a pipelined function to generate output which can be used in a SQL statement.
If neither of these solutions works for you please provide sufficient information for us to answer your question correctly. Your future co-operation is appreciated.
cheers, APC -
Hi,
Please suggest me the best way to fetch the record from the table designed below. It is Oracle 10gR2 on Linux
Whenever a client visits the office, a record is created for him. The company policy is to maintain 10 years of data in the transaction table, but the table accumulates about 3 million records per year.
The table has the following key Columns for the Select (sample Table)
Client_Visit
ID Number(12,0) --sequence generated number
EFF_DTE DATE --effective date of the customer (sometimes the client becomes invalid and he will be valid again)
Create_TS Timestamp(6)
Client_ID Number(9,0)
Cascade_Flg Varchar2(1)
On most of the reports the records are fetched by Max(eff_dte) and Max(create_ts) and cascade flag ='Y'.
I have the following queries, but both of them are not cost-effective and take 8 minutes to display the records.
Code 1:
SELECT au_subtyp1.au_id_k,
       au_subtyp1.pgm_struct_id_k
  FROM au_subtyp au_subtyp1
 WHERE au_subtyp1.create_ts =
         (SELECT MAX(au_subtyp2.create_ts)
            FROM au_subtyp au_subtyp2
           WHERE au_subtyp2.au_id_k = au_subtyp1.au_id_k
             AND au_subtyp2.create_ts < TO_DATE('2013-01-01', 'YYYY-MM-DD')
             AND au_subtyp2.eff_dte =
                   (SELECT MAX(au_subtyp3.eff_dte)
                      FROM au_subtyp au_subtyp3
                     WHERE au_subtyp3.au_id_k = au_subtyp2.au_id_k
                       AND au_subtyp3.create_ts < TO_DATE('2013-01-01', 'YYYY-MM-DD')
                       AND au_subtyp3.eff_dte <= TO_DATE('2012-12-31', 'YYYY-MM-DD')))
   AND au_subtyp1.exists_flg = 'Y'
Explain Plan
Plan hash value: 2534321861
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 91 | | 33265 (2)| 00:06:40 |
|* 1 | FILTER | | | | | | |
| 2 | HASH GROUP BY | | 1 | 91 | | 33265 (2)| 00:06:40 |
|* 3 | HASH JOIN | | 1404K| 121M| 19M| 33178 (1)| 00:06:39 |
|* 4 | HASH JOIN | | 307K| 16M| 8712K| 23708 (1)| 00:04:45 |
| 5 | VIEW | VW_SQ_1 | 307K| 5104K| | 13493 (1)| 00:02:42 |
| 6 | HASH GROUP BY | | 307K| 13M| 191M| 13493 (1)| 00:02:42 |
|* 7 | INDEX FULL SCAN | AUSU_PK | 2809K| 125M| | 13493 (1)| 00:02:42 |
|* 8 | INDEX FAST FULL SCAN| AUSU_PK | 2809K| 104M| | 2977 (2)| 00:00:36 |
|* 9 | TABLE ACCESS FULL | AU_SUBTYP | 1404K| 46M| | 5336 (2)| 00:01:05 |
Predicate Information (identified by operation id):
1 - filter("AU_SUBTYP1"."CREATE_TS"=MAX("AU_SUBTYP2"."CREATE_TS"))
3 - access("AU_SUBTYP2"."AU_ID_K"="AU_SUBTYP1"."AU_ID_K")
4 - access("AU_SUBTYP2"."EFF_DTE"="VW_COL_1" AND "AU_ID_K"="AU_SUBTYP2"."AU_ID_K")
7 - access("AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
filter("AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND
"AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
8 - filter("AU_SUBTYP2"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
9 - filter("AU_SUBTYP1"."EXISTS_FLG"='Y')

Code 2:
I already raised a thread a week back, and Dom suggested the following query; it is cost-effective, but the performance is the same and it used the same amount of temp tablespace.
select au_id_k,pgm_struct_id_k from (
SELECT au_id_k
, pgm_struct_id_k
, ROW_NUMBER() OVER (PARTITION BY au_id_k ORDER BY eff_dte DESC, create_ts DESC) rn,
create_ts, eff_dte,exists_flg
FROM au_subtyp
WHERE create_ts < TO_DATE('2013-01-01','YYYY-MM-DD')
AND eff_dte <= TO_DATE('2012-12-31','YYYY-MM-DD')
) d where rn =1 and exists_flg = 'Y'
--Explain Plan
Plan hash value: 4039566059
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2809K| 168M| | 40034 (1)| 00:08:01 |
|* 1 | VIEW | | 2809K| 168M| | 40034 (1)| 00:08:01 |
|* 2 | WINDOW SORT PUSHED RANK| | 2809K| 133M| 365M| 40034 (1)| 00:08:01 |
|* 3 | TABLE ACCESS FULL | AU_SUBTYP | 2809K| 133M| | 5345 (2)| 00:01:05 |
Predicate Information (identified by operation id):
1 - filter("RN"=1 AND "EXISTS_FLG"='Y')
2 - filter(ROW_NUMBER() OVER ( PARTITION BY "AU_ID_K" ORDER BY
INTERNAL_FUNCTION("EFF_DTE") DESC ,INTERNAL_FUNCTION("CREATE_TS") DESC )<=1)
3 - filter("CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND "EFF_DTE"<=TO_DATE('
2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Thanks,
Vijay

Hi Justin,
Thanks for your reply. I am running this on our Test environment as I don't want to run this on Production environment now. The test environment holds 2809605 records (2 Million).
The query output count is 281699 records (about two hundred thousand) and the selectivity is 0.099. There are 2808905 distinct combinations of create_ts, eff_dte, and exists_flg. I am sure the index scan is not going to help out much, as you said.
The core problem is that both queries use a lot of temp tablespace. When we use this query to join to other tables of the same design as below, the temp tablespace grows even bigger.
Both the production and test environment are 3 Node RAC.
First Query...
CPU used by this session 4740
CPU used when call started 4740
Cached Commit SCN referenced 21393
DB time 4745
OS Involuntary context switches 467
OS Page reclaims 64253
OS System time used 26
OS User time used 4562
OS Voluntary context switches 16
SQL*Net roundtrips to/from client 9
bytes received via SQL*Net from client 2487
bytes sent via SQL*Net to client 15830
calls to get snapshot scn: kcmgss 37
consistent gets 52162
consistent gets - examination 2
consistent gets from cache 52162
enqueue releases 19
enqueue requests 19
enqueue waits 1
execute count 2
ges messages sent 1
global enqueue gets sync 19
global enqueue releases 19
index fast full scans (full) 1
index scans kdiixs1 1
no work - consistent read gets 52125
opened cursors cumulative 2
parse count (hard) 1
parse count (total) 2
parse time cpu 1
parse time elapsed 1
physical write IO requests 69
physical write bytes 17522688
physical write total IO requests 69
physical write total bytes 17522688
physical write total multi block requests 69
physical writes 2139
physical writes direct 2139
physical writes direct temporary tablespace 2139
physical writes non checkpoint 2139
recursive calls 19
recursive cpu usage 1
session cursor cache hits 1
session logical reads 52162
sorts (memory) 2
sorts (rows) 760
table scan blocks gotten 23856
table scan rows gotten 2809607
table scans (short tables) 1
user I/O wait time 1
user calls 11
workarea executions - onepass 1
workarea executions - optimal 9
Second Query
CPU used by this session 1197
CPU used when call started 1197
Cached Commit SCN referenced 21393
DB time 1201
OS Involuntary context switches 8684
OS Page reclaims 21769
OS System time used 14
OS User time used 1183
OS Voluntary context switches 50
SQL*Net roundtrips to/from client 9
bytes received via SQL*Net from client 767
bytes sent via SQL*Net to client 15745
calls to get snapshot scn: kcmgss 17
consistent gets 23871
consistent gets from cache 23871
db block gets 16
db block gets from cache 16
enqueue releases 25
enqueue requests 25
enqueue waits 1
execute count 2
free buffer requested 1
ges messages sent 1
global enqueue get time 1
global enqueue gets sync 25
global enqueue releases 25
no work - consistent read gets 23856
opened cursors cumulative 2
parse count (hard) 1
parse count (total) 2
parse time elapsed 1
physical read IO requests 27
physical read bytes 6635520
physical read total IO requests 27
physical read total bytes 6635520
physical read total multi block requests 27
physical reads 810
physical reads direct 810
physical reads direct temporary tablespace 810
physical write IO requests 117
physical write bytes 24584192
physical write total IO requests 117
physical write total bytes 24584192
physical write total multi block requests 117
physical writes 3001
physical writes direct 3001
physical writes direct temporary tablespace 3001
physical writes non checkpoint 3001
recursive calls 25
session cursor cache hits 1
session logical reads 23887
sorts (disk) 1
sorts (memory) 2
sorts (rows) 2810365
table scan blocks gotten 23856
table scan rows gotten 2809607
table scans (short tables) 1
user I/O wait time 2
user calls 11
workarea executions - onepass 1
workarea executions - optimal 5

Thanks,
Vijay
Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:17 AM
Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:19 AM -
Which is the Best way to upload BP for 3+ million records??
Hello Gurus,
We have 3+ million records of data to be uploaded into CRM, coming from Informatica. Which is the best way to upload the data into CRM that takes the least time and is easiest? Please help me.
Thanks,
Naresh.

Do it with BAPI BAPI_BUPA_FS_CREATE_FROM_DATA2.
-
Best way to delete large number of records but not interfere with tlog backups on a schedule
I've inherited a system with multiple databases, and there are DB and tlog backups that run on schedules. There is a list of tables that need a lot of records purged from them. What would be a good approach for deleting the old records?
I've been digging through old posts, reading best practices, etc., but I'm still not sure of the best way to attack it.
Approach #1
A one-time delete that did everything. Delete all the old records, in batches of say 50,000 at a time.
After each run through all the tables for that DB, execute a tlog backup.
Approach #2
Create a job that does a similar process as above, except don't loop; only do the batch once. Have the job scheduled to start, say, on the half hour, assuming the tlog backups run every hour.
Note:
Some of these (well, most) are going to have relations on them.

Hi shiftbit,
According to your description, I have changed the type of this question to a discussion; that way more experts will focus on this issue and assist you. When you delete a large number of records from tables, you can use bulk (batched) deletions so that the transaction log does not keep growing and run out of disk space. If you can take the table offline for maintenance, a complete reorganization is always best, because it does the delete and places the table back into a pristine state.
For more information about deleting a large number of records without affecting the transaction log, see:
http://www.virtualobjectives.com.au/sqlserver/deleting_records_from_a_large_table.htm
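A minimal sketch of the batched-delete approach (table, column, and cutoff date are assumptions), deleting in small chunks so each transaction stays short and the scheduled log backups can truncate the log between batches:

```sql
-- Hypothetical purge of rows older than a cutoff, 50,000 at a time
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000) FROM dbo.OldOrders
    WHERE CreatedDate < '2010-01-01';
    SET @rows = @@ROWCOUNT;
END;
```

With related tables, deleting from the child tables first (or in the same loop) keeps the foreign-key constraints satisfied.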
Hope it can help.
Regards,
Sofiya Li
Sofiya Li
TechNet Community Support -
What is the best way to do voice recording in a Macbook Pro?
What is the best way to do voice recording on a MacBook Pro? I want to voice record and send it as an MP3 file.
Thanks

Deleting the application from your /Applications folder is sufficient. There are sample projects in /Library/Application/Aperture you may want to get rid of as well, as they take up a fair bit of space.
-
What is the best way to record a video(not big) for free on a mac w/isight?
What is the best way to record a video (not big) for free on a Mac with iSight? I need to just make a short (less than 5 minutes, no special effects) video, but I can't figure out any programs that will make this possible for me.
Thanks.

Hi Carolyn,
you can use iMovie to record a video for free on a Mac with iSight.
1. Open iMovie HD
2. Choose “Create a new project”
3. Name the project, for example "iSight Movie" (first field). In the second field, leave the default location "Sequences"; in the third one, video format, choose "iSight". Push "Create".
4. iMovie window opens, next to the scissors, choose the camera (Camera mode), then iSight. Now you appear on the screen, right?
5. Press “Record with iSight” in the main picture window.
6. Press the same button to stop. Your new clip appears now in the clip panel at the right.
7. You are done!
iMac G5 PPC 2,1 Ghz Mac OS X (10.4.8) -
What is the best way to filter an IP from being blocked?
What is the best way to filter an IP from being blocked by a false positive? Event Action Filter?
I'll assume you really mean "blocked" as opposed to "denied". You can either create an event action filter and subtract the blocked action, or you can add the address to the "never block" addresses.
-
What is the best way to record a project from the timeline to an external recorder via FireWire? Also, who makes a recorder that works with a Mac and records in real time? This is possible, right?
While theoretically possible, sometimes the camera manufacturers disable recording back to tape from the computer due to DRM (digital rights management) issues. They will allow tape-to-tape transfers, however.
Test your process first is all I can advise.
x -
Best way to implement a shared Blocking Queue?
What's the best way to implement a shared Blocking Queue that multiple JVMs can enqueue objects in and multiple JVM's can dequeue from simultaneously?
Also, I see references on the web to com.tangosol.coherence.component.util.queue.ConcurrentQueue but I don't see it in the current API docs...
Thanks

Hi snidely_whiplash,
snidely_whiplash wrote:
What's the best way to implement a shared Blocking Queue that multiple JVMs can enqueue objects in and multiple JVM's can dequeue from simultaneously?
Also, I see references on the web to com.tangosol.coherence.component.util.queue.ConcurrentQueue but I don't see it in the current API docs...
Thanks

That class is an internal class, AFAIK.
As for implementing a queue, you might want to look at Ashish Srivastava's ezMQ component for some ideas:
http://ezsaid.blogspot.com/2009/01/implementing-jms-queue-on-top-of-oracle.html
Best regards,
Robert