How to reduce delay in dropdowns in tables?
Hi Experts,
If we use dropdowns in tables in WD, there is some delay when we select any value from the options.
Can anyone help me remove the delay from the dropdown completely?
I want it to work just like Excel, where the value you select appears at that very moment.
Thank you in advance !
Regards,
Anand
Hi Lekha,
Thanks for your reply...
I have some code in the modifyView method. But is there no way I can remove the delay of the dropdown selection?
Something that will not make the application trigger processing even when dropdown values are selected.
Regards,
Anand
Similar Messages
-
In a SQL query which has a join, how to reduce multiple instances of a table
Here is an example: I am using Oracle 9i
Is there a way to reduce the number of PERSON instances in the following query? Or can I optimize this query further?
TABLES:
mail_table
mail_id, from_person_id, to_person_id, cc_person_id, subject, body
person_table
person_id, name, email
QUERY:
SELECT p_from.name from_name, p_to.name to_name, p_cc.name cc_name, subject
FROM mail, person p_from, person p_to, person p_cc
WHERE from_person_id = p_from.person_id
AND to_person_id = p_to.person_id
AND cc_person_id = p_cc.person_id;
Thanks in advance,
Babu.

SQL> select * from mail;
ID F T CC
1 1 2 3
SQL> select * from person;
PID NAME
1 a
2 b
3 c
--Query with only one instance of PERSON table
SQL> select m.id,max(decode(m.f,p.pid,p.name)) frm_name,
2 max(decode(m.t,p.pid,p.name)) to_name,
3 max(decode(m.cc,p.pid,p.name)) cc_name
4 from mail m,person p
5 where m.f = p.pid
6 or m.t = p.pid
7 or m.cc = p.pid
8 group by m.id;
ID FRM_NAME TO_NAME CC_NAME
1 a b c
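A quick way to see why the single-instance rewrite works: each PERSON row picked up by the OR'd predicates feeds at most one of the three conditional aggregates, and GROUP BY m.id folds them back into one row per mail. A minimal sketch of the same rewrite with Python's sqlite3 (CASE standing in for Oracle's DECODE; the data matches jeneesh's example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE mail   (id INTEGER, f INTEGER, t INTEGER, cc INTEGER);
    CREATE TABLE person (pid INTEGER, name TEXT);
    INSERT INTO mail   VALUES (1, 1, 2, 3);
    INSERT INTO person VALUES (1, 'a'), (2, 'b'), (3, 'c');
""")

# One pass over PERSON instead of three joined copies: each matching
# row contributes its name to exactly one of the three aggregates.
rows = cur.execute("""
    SELECT m.id,
           MAX(CASE WHEN m.f  = p.pid THEN p.name END) AS frm_name,
           MAX(CASE WHEN m.t  = p.pid THEN p.name END) AS to_name,
           MAX(CASE WHEN m.cc = p.pid THEN p.name END) AS cc_name
      FROM mail m, person p
     WHERE m.f = p.pid OR m.t = p.pid OR m.cc = p.pid
     GROUP BY m.id
""").fetchall()
print(rows)  # [(1, 'a', 'b', 'c')]
```

The trade-off is the same one the explain plans below/above show on any engine: one scan with OR predicates and a GROUP BY versus three hash joins.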
--Explain plan for "one instance" query
SQL> explain plan for
2 select m.id,max(decode(m.f,p.pid,p.name)) frm_name,
3 max(decode(m.t,p.pid,p.name)) to_name,
4 max(decode(m.cc,p.pid,p.name)) cc_name
5 from mail m,person p
6 where m.f = p.pid
7 or m.t = p.pid
8 or m.cc = p.pid
9 group by m.id;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 902563036
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 3 | 216 | 7 (15)| 00:00:01 |
| 1 | HASH GROUP BY | | 3 | 216 | 7 (15)| 00:00:01 |
| 2 | NESTED LOOPS | | 3 | 216 | 6 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL| MAIL | 1 | 52 | 3 (0)| 00:00:01 |
|* 4 | TABLE ACCESS FULL| PERSON | 3 | 60 | 3 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
4 - filter("M"."F"="P"."PID" OR "M"."T"="P"."PID" OR
"M"."CC"="P"."PID")
Note
- dynamic sampling used for this statement
--Explain plan for "Normal" query
SQL> explain plan for
2 select m.id,pf.name fname,pt.name tname,pcc.name ccname
3 from mail m,person pf,person pt,person pcc
4 where m.f = pf.pid
5 and m.t = pt.pid
6 and m.cc = pcc.pid;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 4145845855
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 112 | 14 (15)| 00:00:01 |
|* 1 | HASH JOIN | | 1 | 112 | 14 (15)| 00:00:01 |
|* 2 | HASH JOIN | | 1 | 92 | 10 (10)| 00:00:01 |
|* 3 | HASH JOIN | | 1 | 72 | 7 (15)| 00:00:01 |
| 4 | TABLE ACCESS FULL| MAIL | 1 | 52 | 3 (0)| 00:00:01 |
| 5 | TABLE ACCESS FULL| PERSON | 3 | 60 | 3 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
| 6 | TABLE ACCESS FULL | PERSON | 3 | 60 | 3 (0)| 00:00:01 |
| 7 | TABLE ACCESS FULL | PERSON | 3 | 60 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("M"."CC"="PCC"."PID")
2 - access("M"."T"="PT"."PID")
3 - access("M"."F"="PF"."PID")
PLAN_TABLE_OUTPUT
Note
- dynamic sampling used for this statement
25 rows selected.
Message was edited by:
jeneesh
No indexes created... -
How to reduce the size of System tables(RS*) in SAP BW?
Hi All,
We need to reduce the size of the system tables (RS*) in a SAP BW system without impacting the system.
Could you please let us know whether there is any global program/function module to do this.
If not, if you know of any individual program or another way to reduce the system table size, it would be very useful.
Sample System tables(RS*) are given below.
RSHIENODETMP
RSBERRORLOG
RSHIENODETMP~0
RSBMNODES
RSBKDATA
RSBMNODES~001
RSRWBSTORE
RSBMLOGPAR
RSBERRORLOG~0

Sudhakar,
There are tables you can archive / clean up, and then there are tables you cannot do anything about. For example, if your system has a million queries, the RSRREPDIR and RZCOMPDIR tables will be large.
The tables that typically get archived are :
1. BALDAT / BALHDR - application log tables
2. Monitor tables - search for Request archiving which will tell you how to archive the same
The other tables -
First you would have to understand why they are large in the first place. If you have too many hierarchies, some tables can be huge; delete some of the hierarchies you do not need and the table sizes should come down.
RSRWBSTORE - this is the internal store for workbooks - this will have the last executed version of the workbook stored in the table. This information is called when the workbook is executed without refreshing the variables - which is why you get the workbook output first and then get prompted to refresh the variables. -
How to reduce time for replicating large tables?
Hi
Any suggestions on how to reduce the amount of time it takes to replicate a large table when it is first created?
I have a table with 150 million rows in it, and it takes forever to start the replication process even if I run it in parallel, and I can't afford the downtime.

What downtime are you referring to? The primary doesn't need to be down when you're setting up replication, and you're presumably still in the process of doing the initial configuration on the replicated database, so it's not really down, it's just not up yet.
Justin -
How to reduce size of SQL server table?
Hi experts,
I have a table where 99.9% of the size (9 GB) is unused. How do I reduce the size?
I have tried those commands below but they do not work.
ALTER INDEX [VBDATA~0] ON qa2.VBDATA REBUILD
DBCC CLEANTABLE (QA2,"qa2.VBDATA", 0)
WITH NO_INFOMSGS;
GO
ALTER INDEX ALL ON qa2.VBDATA REBUILD

Hi deepakkori,
Thanks for your help. I have found the solution.
"Also other option which you could try is :
ALTER INDEX Index_name on Table_Name REORGANIZE WITH (LOB_COMPACTION=ON)"
you can check full discussion here:
http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/thread/68af32bb-eefa-494a-b62d-1ebd1387d105 -
How to reduce size of a huge table in Access
We have an Access mdb file that is getting too large. It is about 1.7 GB, and will probably hit 2.0 GB in two or three months. It has already been compacted. There are only two tables, and 99% is from just one of the two tables, which has
about 8.4 million records. What are some strategies I could use to reduce the file size? I've already done the database splitting strategy. This is one of three back-end files in the split set. I think some users might have Access 2003,
so if you have a solution that is unique to Access 2007/2010, I'm not sure if it will be viable or not.

Here it is. Your question prompted me to change the datatype on Account and Fiscal Year, which were previously Text. That reduced the size by 0.1 GB, but I definitely need to make further improvements.
Company Code: Text
Cost Center: Text
Account: Long Integer
Item Number: Text
Item Description: Text
Charge Code: Text
Qty: Double
Unit of Measure: Text
Cost Per Unit of Measure: Double
Total Expense: Double
PO Number: Text
Vendor Name: Text
Vendor Number: Text
Source Type: Text
Fiscal Year: Integer
Fiscal Period: Long Integer
Journal Type: Text
Transaction Date: Date/Time
How to reduce delay caused by coding?
Hi,
I am fairly new to C++ programming and using DAQ boards. I've been browsing this board for a while, and let me first thank everyone who has contributed - most threads were very helpful.
However this problem I have right now didn't come up in any searches so hopefully someone could help me out.
I am using the NI-6115 PCI DAQ board and I am trying to write a simple C program that continuously counts the number of input pulses to a counter within a time period. The program also has to continuously output the result onto the screen.
So basically this is what my code looks like:
while (1) {
    DAQmxErrChk (DAQmxReadCounterScalarU32(taskHandle, 10.0, &data, NULL));
    count = data - data1;
    if ((long)count < 0)                     /* counter wrapped around */
        count = data + 16777216 - data1;     /* 24-bit rollover correction */
    data1 = data;
    printf("\rCount: %lu. Press Ctrl+C to interrupt. ", (unsigned long)count);
    fflush(stdout);
    DAQmxErrChk (DAQmxWaitForNextSampleClock(taskHandleCtr, 10.0, 0));
}
The sample clock uses another counter's output pulse train as the source.
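The rollover arithmetic in the loop above can be checked in isolation. A minimal sketch (assuming, as the 16777216 constant in the code implies, a 24-bit counter; the function name is mine):

```python
COUNTER_MODULUS = 1 << 24  # 2**24 = 16777216, matching the constant in the C code

def pulse_delta(current, previous):
    """Pulses counted between two reads, correcting for at most one wrap-around."""
    d = current - previous
    if d < 0:                            # counter wrapped past 2**24 - 1
        d = current + COUNTER_MODULUS - previous
    return d

print(pulse_delta(150, 100))     # 50: no rollover
print(pulse_delta(5, 16777210))  # 11: counter wrapped between reads
```

Note this only handles a single wrap per sample interval, which is why missing sample clock edges (as in the error below) also corrupts the count.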
The problem is, the desired execution time of each iteration is around 40-50 microseconds. With the above code, I could only achieve 800 microseconds per iteration. If I set the sample clock any faster, an error occurs telling me that two sample clock pulses have occurred before the "DAQmxWaitForNextSampleClock" instruction is reached.
Would it be the computer that I am running the program on? The speed of the DAQ board? Or just poor programming technique on my part?
Thanks in advance
Howard
Message Edited by HT156 on 05-14-2009 06:17 PM
Solved!
Go to Solution.

Thanks for your last reply, David.
Since switching to a new OS is not an option for me, I have decided to reconsider my program logic in order to achieve an even faster time.
I have decided to store the read counter values first then do the calculations later. Also, I have omitted the part of continuously printing the results onto the screen. Here's what remains of my code, which is incomplete.
while (1) {
    DAQmxErrChk (DAQmxReadCounterScalarU32(taskHandle, 10.0, &data, NULL));
    DAQmxErrChk (DAQmxWaitForNextSampleClock(taskHandleCtr, 10.0, 0));
}
However, even with those two lines of code I am only able to get down to 350 microseconds per iteration before WaitForNextSampleClock returns an error. I was wondering if there is a faster function to read the counter value than DAQmxReadCounterScalarU32? And is there a way to bypass the CPU and store the counter values directly on the DAQ board?
How to minimize I/O in a Table
Hi,
I have this table in a Fin (PeopleSoft) database; its size is 11 million rows.
The thing is, anytime we run a query or billing process on this table it generates a lot of I/O, and it's the biggest and most important table in the database.
Could someone give me some advice on how to reduce I/O on this table?
Thanks.
Regards!
Texas!

You can join the corresponding V$ views for a current-time explain plan.
Alternatively, you can use the following query. Add a WHERE clause to filter by snap_id and/or object_owner. The Operation column will show you the full table scans (FTS).
Select
trim(sql_id),
trim(id),
trim(plan_hash_value),
operation||' '||options||decode(id, 0, substr(optimizer,1, 10)||' Cost='||to_char(cost)) "Operation",
object_name,
object_owner,
cost,
cardinality,
round(bytes/1024) kbytes
from DBA_HIST_SQL_PLAN natural join dba_hist_snapshot Natural join dba_hist_sqlstat -
Urgent HELP - How to reduce the number of entries in table MESYBODY?
Does anyone know how to reduce the number of records in the synchronization tables MESYBODY and MESYHEAD? The reason I want to reduce these tables is that I believe they have so many duplicated entries that it causes extremely slow performance every time we sync. Currently, there are almost 50,000 records for approximately 100 users, and I believe it shouldn't be that many.
We are running into a problem that it takes approximately 25-30 minutes to sync.
Any advice would be highly appreciated.
Regards,
Dai

Hi Dai,
please try to run the middleware job WAF_MW_MAPPING. For some applications (Mobile Time and Travel for example) this helps to clean the tables.
Br, alex
alexander ilg
http://www.msc-mobile.com -
How to Reduce Clustering Factor on a Table?
I am seeing a very high clustering factor on an SDO geometry table in our 10g RAC DB on our Linux boxes. This slow performance is repeatable on other Linux as well as Solaris DBs for the same table. Inserts go in at a rate of 44 milliseconds per insert, and we only have about 27,000 rows in the table. After viewing a VERY slow insert of about 600 records into this same table, I saw the clustering factor in OEM. The clustering factor is nearly identical to the number of rows in the table, indicating that the usability of the index is fairly low now. I have referenced Metalink Tech Note 223117.1 and, while it affirms what I've seen, I am still trying to determine how to reduce the clustering factor. The excerpt on how to do this is below:
"The only method to affect the clustering factor is to sort and then store the rows in the table in the same order as in they appear in the index. Exporting rows and putting them back in the same order that they appeared originally will have no affect. Remember that ordering the rows to suit one index may have detrimental effects on the choice of other indexes."
Sounds great, but how does one actually go about storing the rows in the table in the same order as they appear in the index?
We have tried placing our commits after the last insert as well as after every insert, and the results are fairly negligible. We also have a column of type SDE.ST_GEOMETRY in the table and are wondering if this might also be an issue. Thanks in advance for any help.
Matt Sauter

Joel is right that the clustering factor is going to have absolutely no effect on the speed of inserts. The clustering factor is merely one, purely statistical, factor the optimiser makes use of to determine how to perform a SELECT statement (i.e., do I bother to use this index or not for row retrieval). It's got nothing to do with the efficiency of inserts.
If I were you, I'd be looking at factors such as excessive disk I/O taking place for other reasons, inadequate buffer cache and/or enqueue and locking issues instead.
If you're committing after every insert, for example, then redo will have to be flushed (a commit is about the only foreground wait event -i.e., one that you get to experience in real time- that Oracle has, so a commit after every insert's really not a smart idea). If your redo logs are stored on, say, the worst-performing disk you could buy that's also doing duty as a fileserver's main hard disk, then LGWR will be twiddling its thumbs a lot! You say you've tested this, and that's fine... I'm just saying, it's one theoretical possibility in these sorts of situations. You still want to make sure you're not suffering any log writer-related waits, all the same.
Similarly, if you're performing huge reads on a (perhaps completely separate) table that is causing the buffer cache to be wiped every second or so, then getting access to your table so your inserts can take place could be problematic. Check if you've got any database writer waits, for example: they are usually a good sign of general I/O bottlenecks.
Finally, you're on a RAC... so if the blocks of the table you're writing to are in memory over on another instance, and they have to be shipped to your instance, you could have high enqueue waits whilst that shipment is taking place. Maybe your interconnect is not up to the job? Maybe it's faulty, even, with significant packet loss along the way? Even worse if someone's decided to switch off cache fusion transfer for the datafiles involved (for then block shipment happens by writing them to disk in one instance and reading from disk in the other). RAC adds a whole new level of complexity to things, so good luck tracking that lot down!!
Also, maybe you're using Freelists and Freelist groups rather than ASSM, so perhaps you're fighting for access to the freelist with whatever else is happening on your database at the time...
You get the idea: this could be a result of activity taking place on the server for reasons completely unconnected with your insert. It could be a feature of Spatial (with which not many people will be familiar, so good luck if so!) It could be a result of the way your RAC is configured. It could be any number of things... but I'd be willing to bet quite a bit that it's got sod-all to do with the clustering factor!
You'll need to monitor the insert using a tool like Insider or Toad so you can see if waits and so on happen, more or less in real time -or start using the built-in tools like Statspack or AWR to analyze your workload after it's completed- to work out what your best fix is likely to be. -
How to Reduce cost of full table scan or remove full table scan while executing the query
Dear Experts
I need your help.
I executed a query and created an explain plan. In that plan I found the cost of one table access is very high (2777), and it is a full table scan.
Please guide me on how to reduce the cost of the full table scan, or remove it, while executing the query.
Thanks

Need your help to tune this query:
SELECT DISTINCT ool.org_id, ool.header_id, ooh.order_number, ool.line_id,
ool.line_number, ool.shipment_number,
NVL (ool.option_number, -99) option_number, xcl.GROUP_ID,
xcl.attribute3, xcl.attribute4
FROM oe_order_headers ooh,
xxcn_comp_header xch,
xxcn_comp_lines xcl,
fnd_lookup_values_vl fvl,
oe_order_lines ool
WHERE 1 = 1
AND ooh.org_id = 1524
AND xch.src_ref_no = TO_CHAR (ooh.order_number)
AND xch.src_ref_id = ooh.header_id
AND xch.org_id = 1524
AND xcl.header_id = xch.header_id
AND ool.line_id = xcl.oe_line_id
AND ool.flow_status_code IN
('WWD_SHIPPED',
'FULFILLED',
'SHIPPED',
'CLOSED',
'RETURNED')
AND ool.org_id = 1524
AND ool.header_id = ooh.header_id
AND xch.org_id = 1524
AND fvl.lookup_type = 'EMR OIC SOURCE FOR OU'
AND fvl.tag = '1524'
AND fvl.description = xch.SOURCE
AND EXISTS (
SELECT 1
FROM oe_order_lines oe
WHERE oe.header_id = ool.header_id
AND oe.org_id = 1524
AND oe.line_number = ool.line_number
AND oe.ordered_item = ool.ordered_item
AND oe.shipment_number > ool.shipment_number
AND NVL (oe.option_number, -99) =
NVL (ool.option_number,
-99)
AND NOT EXISTS (
SELECT 1
FROM xxcn_comp_lines xcl2
WHERE xcl.GROUP_ID = xcl2.GROUP_ID
AND oe.line_id = xcl2.oe_line_id))
call count cpu elapsed disk query current rows
Parse 1 0.07 0.12 12 25 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 103.03 852.42 176206 4997766 0 12
total 4 103.10 852.55 176218 4997791 0 12
In this the LIO (logical I/O) is very high... can you please help in resolving this performance issue?
How to populate values in to dropdown in table ui element
Hi,
According to my scenario, I have a table with five records, and I have a column named DATE which contains 5 dropdowns with some values, i.e. dates from Jan 1 2008 to Dec 31 2008. The user needs to select only those values which are in the dropdown. Can you tell me the code to populate values into the dropdowns in a table UI element?
Thanks
Raju

Hi,
You can go for one of the two dropdowns, DropDownByKey or DropDownByIndex, as per the requirement.
Create a context node for the table UI element; one attribute in it will be for your dropdown. Create elements for the context node and add these to the context.
Code example for DropDownBy Key:-
ISimpleType simpleType = wdContext.nodeProjEstiTable().getNodeInfo()
    .getAttribute("projphasname").getModifiableSimpleType();
IModifiableSimpleValueSet svs1 =
    simpleType.getSVServices().getModifiableSimpleValueSet();
svs1.clear();
for (int j = 0; j < projphasname.length; j++) {
    svs1.put(projphasname[j][1], projphasname[j][1]);
}
For DropDownByIndex you can work in the normal way, i.e. create elements for the respective context attribute.
Hope this may help you...
Deepak -
Updating new records is taking too much time to complete. How to reduce it?
Hi,
I have a table with 200 columns and am trying to update 5 of them; the table has 500,000 (5 lakh) records. It is taking a very long time, 2 hours, to complete. Please let me know why this is taking so long and how to reduce it.
In my SSIS package I am using:
Oracle Source
Lookup
OLE DB Command for the update.
Please help, I am stuck.

I have something like this:
Update table
Set column1 =@column1, column2=@column2,column3=@column3,column4=@column4,column5=@column5
where column1=@column1
In this case I need to add an index on column1, right? Please let me know.
Yes, an index on column1 (preferably clustered) would avoid a table scan for each update. The ELT staging-table alternative Jim suggested will likely perform better than individual updates for a large process like this.
Dan Guzman, SQL Server MVP, http://www.dbdelta.com -
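The staging-table ELT approach Jim suggested boils down to bulk-loading the new values into a staging table and issuing one set-based UPDATE keyed on column1, instead of 500,000 individual OLE DB Command executions. A minimal sketch with Python's sqlite3 (table and column names are illustrative, not from the original package):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE target  (column1 INTEGER PRIMARY KEY, column2 TEXT);
    CREATE TABLE staging (column1 INTEGER PRIMARY KEY, column2 TEXT);
    INSERT INTO target  VALUES (1, 'old'), (2, 'old'), (3, 'old');
    INSERT INTO staging VALUES (1, 'new'), (3, 'new');
""")

# One set-based statement instead of one UPDATE per source row.
# The key on column1 plays the role of the index recommended above:
# each correlated lookup becomes a key seek rather than a table scan.
cur.execute("""
    UPDATE target
       SET column2 = (SELECT s.column2 FROM staging s
                       WHERE s.column1 = target.column1)
     WHERE column1 IN (SELECT column1 FROM staging)
""")
conn.commit()
rows = cur.execute("SELECT column1, column2 FROM target ORDER BY column1").fetchall()
print(rows)  # [(1, 'new'), (2, 'old'), (3, 'new')]
```

In SQL Server the equivalent would typically be an UPDATE ... FROM join against the staging table; the point is the single set-based statement, not the exact syntax.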
How to delete the entries in CDCLS table
Hi All,
There is a potential issue with the size of table CDCLS on the production system. We need to reduce or manage the amount of data in this table. CDCLS is a cluster table and cannot be reduced directly,
so first we reduced the entries in CDHDR and then in the CDPOS tables.
After that we tried to reduce the entries in the CDCLS table through the program RSSCD7RE, but we are facing the errors "no data available" and "no active archive key was selected".
Please help me out with how to proceed further in this task. Thanks in advance.

Go to DB15 and see the archiving object.
You can also use the archiving object CHANGEDOCU, please refer SAP documents for that.
Transaction is
SARA -> Archive object CHANGEDOCU
Message was edited by:
ANIRUDDHA DAS -
How to reduce buffer busy waits, session hanging due to buffer busy waits
Hi,
How to reduce buffer busy waits, session hanging due to buffer busy waits.
Thanks,
Sathis.When I see through enterprise manager I see lot of
tables with buffer busy waits.
Is there any way by table name we can check the
blocks info.
The simple way is to look at the SQL statement and the corresponding table name.
P1=file#, P2=block#. You can extract segment name(table or index) using this info.
Query v$bh like following:
SQL> select file#, block#, class#, objd from v$bh where file# = P1 and block# = P2;
SQL> select object_name from all_objects where object_id = <objd>;

See the following doc:
http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_1051.htm
Or you can dump block:
SQL> alter system dump datafile <P1> block <P2>;

Some excerpts from a block dump:
scn: 0x07df.17e70782 seq: 0x01 flg: 0x04 tail: 0x07822301
frmt: 0x02 chkval: 0x61d0 type: 0x23=PAGETABLE SEGMENT HEADER
Map Header:: next 0x00000000 #extents: 1 obj#: 55881 flag: 0x10000000
Can we do something at the table level that will reduce the waits?
Yes, some methods are known. But before thinking of that, you must verify which block class and which access pattern are involved.
Typo... always. :(
Message was edited by:
Dion_Cho