A way to reuse a '?' in a query?
Hello,
Suppose I have this query:
SELECT x FROM t WHERE t.a=? AND t.b=? AND t.c=?
But I need some flexibility in the order of the bind variables, and the ability to reuse them. Something like:
SELECT x FROM t WHERE t.a=?2 AND t.b=?1 AND t.c=?2
(In this case, we reuse the second '?', and we put them in an arbitrary order.)
This allows flexibility: we don't have to change the Java code if the order of the Statement.setXXX() calls needs to change.
I wrote a wrapper around the Statement object myself to resolve this issue by pre-parsing these '?n' tokens, where 'n' is the parameter number, but I don't like the idea of wrapping everything.
Anybody has an idea on this ?
Thanks
Riad
Shuffling around bound parameters is a reasonably low-level concept. The easiest way around it is to execute the query dynamically, as in "select x from" + sTableName + " where " + x + " = " + y.... rather than using bind parameters.
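If string concatenation is not an option (it gives up bind-variable plan reuse and opens the door to SQL injection), the pre-parsing wrapper Riad describes is not much code. A minimal sketch, with made-up class and field names, that naively ignores '?' characters inside string literals and comments:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Rewrites '?n' markers into plain '?' placeholders and records,
 *  for each JDBC position, which logical parameter number it maps to. */
public class IndexedSql {
    final String jdbcSql;          // SQL with standard '?' placeholders
    final List<Integer> mapping;   // mapping.get(i) = logical number of the (i+1)-th '?'

    public IndexedSql(String source) {
        Matcher m = Pattern.compile("\\?(\\d+)").matcher(source);
        StringBuffer sb = new StringBuffer();
        List<Integer> map = new ArrayList<>();
        while (m.find()) {
            map.add(Integer.valueOf(m.group(1)));   // remember logical index n
            m.appendReplacement(sb, "?");           // emit a plain placeholder
        }
        m.appendTail(sb);
        this.jdbcSql = sb.toString();
        this.mapping = map;
    }
}
```

A wrapping PreparedStatement's setXXX(n, value) then delegates to the real statement's setXXX(i + 1, value) for every position i where mapping.get(i) == n, so one logical parameter can drive several placeholders. Note also that JPQL accepts '?1'-style positional parameters natively, so if a persistence layer is available the wrapper may be unnecessary.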
slag
Similar Messages
-
Is there any way to prevent the OS from querying the Superdrive when start
Just a random question. Whenever my MacBook Pro starts up (either from sleep or a complete/fresh start), the OS queries the drive slot to see if there is a disc present. While this may be normal behavior, it seems to slow down the startup process. (I rarely ever have a disc in the drive.) Just curious if this is, in fact, normal, or if there's something awry.
Is there any way to prevent the OS from querying the Superdrive when start
No
The OS queries the drive slot to see if there is a disc present.
How can you tell? Based on the noise it makes? -
Is there a way to not to execute a query on table rendering?
Is there a way to not to execute a query associated with an af:table on table rendering on the page?
I would like to control this programmatically.
Any examples will be much appreciated.
Yes, there is.
{thread:id=2362354} -
Is there a way to have more than one Query view in the same workbook?
Is there a way to have more than one Query view in the same workbook?
BEx allows us to insert Queries into workbooks, but not saved views.
I can open a view in Excel and then save it as a workbook, but after that there is no way to add another view to the same Excel file. If I open a new view, it opens in a new Excel.
Hi
When you open a query in BEx Analyzer, you can save it as a query view as well as a workbook. The difference is: a workbook is what you get when you work with the design mode (on the left side of your screen), while a query view is what you save after drilling down on some characteristics or key figures from the initial screen when you execute the query in BEx Analyzer. -
A way to reuse existing classes instead of generating the stub ones?
Hello to all,
I am using eclipse and weblogic 10.3 as an application server.
I have a first project deployed with some exposed web services. I need to access these services from a second project, so I run the clientgen ant task, which generates the client interface along with some stub classes. These stub classes are basically a copy of the ones from the first project.
My question is this:
Is there a way to reuse the original objects that the first project is using, by putting the first project as a dependency of the second? Or do I have to use the generated stub classes?
Thanks in advance! Any help is appreciated. -
Is there any way we can simplify this update query ?
Is there any way we can simplify this update query? There is nothing wrong with the query, but it looks so clumsy. Is there any other way of doing this update, like using a WITH clause or EXISTS or anything else?
[code]
UPDATE STG_TMP_MBS_POOL s
SET s.instrument_id =
  CASE WHEN ( (SELECT DISTINCT iai.alternate_id
               FROM instrument_alternate_id iai, STG_TMP_MBS_POOL s
               WHERE s.fi_instrument_id = iai.fi_instrument_id
                 AND iai.alternate_id_type_code IN ('FMR_CUSIP','CUSIP')) > 1 )
       THEN (SELECT DISTINCT iai.alternate_id
             FROM instrument_alternate_id iai, STG_TMP_MBS_POOL s
             WHERE s.fi_instrument_id = iai.fi_instrument_id
               AND iai.alternate_id_type_code = 'FMR_CUSIP')
       ELSE (SELECT DISTINCT iai.alternate_id
             FROM instrument_alternate_id iai, STG_TMP_MBS_POOL s
             WHERE s.fi_instrument_id = iai.fi_instrument_id
               AND iai.alternate_id_type_code IN ('FMR_CUSIP','CUSIP'))
  END;
[/code]
update stg_tmp_mbs_pool s
set s.instrument_id = case when (select distinct iai.alternate_id
from instrument_alternate_id iai,
stg_tmp_mbs_pool s
where s.fi_instrument_id = iai.fi_instrument_id
and iai.alternate_id_type_code in ('FMR_CUSIP','CUSIP')
) > 1
then (select distinct iai.alternate_id
from instrument_alternate_id iai,
stg_tmp_mbs_pool s
where s.fi_instrument_id = iai.fi_instrument_id
and iai.alternate_id_type_code = 'FMR_CUSIP')
else (select distinct iai.alternate_id
from instrument_alternate_id iai,
stg_tmp_mbs_pool s
where s.fi_instrument_id = iai.fi_instrument_id
and iai.alternate_id_type_code in ('FMR_CUSIP','CUSIP'))
end
Maybe
begin
update stg_tmp_mbs_pool s
set s.instrument_id = (select distinct iai.alternate_id
from instrument_alternate_id iai,
stg_tmp_mbs_pool s
where s.fi_instrument_id = iai.fi_instrument_id
and iai.alternate_id_type_code in ('FMR_CUSIP','CUSIP'));
update stg_tmp_mbs_pool s
set s.instrument_id = (select distinct iai.alternate_id
from instrument_alternate_id iai,
stg_tmp_mbs_pool s
where s.fi_instrument_id = iai.fi_instrument_id
and iai.alternate_id_type_code = 'FMR_CUSIP')
where s.instrument_id > 1;
end;
Regards
Etbin -
Any way to reuse username.seen and preserve Read flags
After migrating from a 10.3 Server to a 10.5 Server, what I find is that the "Read/Unread" flags have been reset.
But cyrus keeps these records in the username.seen files.
Is there any way to reuse these files for the read/unread flags to stick through migration?
Message was edited by: Celia Wessen
Hi,
Use 'FORMAT INTENSIFIED ON' before your write statements.
Links:
http://sap-img.com/ab027.htm
https://forums.sdn.sap.com/click.jspa?searchID=16122441&messageID=5116798
Regards,
Harish -
Is there a way to reuse a region in multiple pages?
Is there a way to reuse a region in multiple pages?
For future maintenance, it would be easier to make changes in just one region. Maintaining multiple copies of a region is more work and eventually someone will forget to change one of the regions.
Thank you.
Hi wolfv
Create your region on page 0 and then conditionally set the display of the region for specific pages in your application only.
Under the region's "Condition Type" attribute select "Current page is contained within Expression 1 (comma delimited list of pages)" and in Expression 1 put 1,2,4,6,99 etc., or whatever pages you want it to appear on.
This works especially well for tree regions.
regards
Paul P -
Please help me find other ways to tune this select query.
Hello Guru,
I have a select query which retrieves data from 10 tables; around 4 of the tables have 200,000-400,000 (2-4 lakh) records and the rest have 80,000-100,000 records.
It is taking around 7-8 seconds to fetch 55,000 records.
I was strictly told by the client that I should not use HINTS in my query. My query is below. Please help me find other ways to tune this select query.
select
CT.CUST_ID
,CT.ROMANISED_SURNAME
,CT.SURNAME
,CT.ROMANISED_GIVEN_NAME
,CT.GIVEN_NAME
,CT.ROMANISED_MIDDLE_NAME
,CT.MIDDLE_NAME
,CT.ROMANISED_NAME_SUFFIX
,CT.NAME_SUFFIX
,CT.ROMANISED_TITLE
,CT.TITLE
,CT.ROMANISED_NAME_INITIALS
,CT.NAME_INITIALS
,CT.NAME_TEXT
,CT.CUST_JRNY_ID
,RK.REMARK_TYPE
,RK.REMARK_ID+CT.CUST_ID as REMARK_ID
,RK.REMARK_STATUS
,RK.REMARK_TEXT
,RK.HOST_ONLY_IND
,RK.SUPERVISORY_IND
,RK.CUST_COMM_IND
,RK.REMARK_SEQ
,RK.REMARK_CODE
,RK.DEFAULT_CUST_REL_IND
,RK.DEFAULT_FLIGHT_SEG_REL_IND
,RK.IATA_CODE
,RK.ICAO_CODE
,CJ.RECORD_LOCATOR "SITA_RECORD_LOCATOR"
,Cjv.Record_Locator "ORIGINATOR_RECORD_LOCATOR"
,FS.TRAVELLING_GROUP_CODE
,CG.GROUP_NAME
FROM FLIGHT_LEG FL
,CUST_FLIGHT_LEG CFL
,CUST CT
,CUST_REMARK CTR
,REMARK RK
,FLIGHT_SEG_FLIGHT_LEG FSFL
,FLIGHT_SEG FS
,CUST_JRNY CJ
,CUST_JRNY_VERSION CJV
,CUST_GROUP CG
WHERE FL.OPR_FLIGHT_NUMBER = 1--I_OPR_FLIGHT_NUMBER
and FL.HISTORY_VERSION_NUMBER = 0
and FL.DEPARTURE_STATION_CODE = 'DEL'--I_DEPARTURE_STATION_CODE
and FL.DEPARTURE_DATETIME = TO_DATE('10-DEC-2012 18.45.00', 'DD-MON-YYYY HH24.MI.SS')
and FL.OPR_SERVICE_PROVIDER_CODE= 'AI'--i_opr_service_provider_code
and FL.OPR_FLIGHT_SUFFIX = 'A'--NVL(I_OPR_FLIGHT_SUFFIX, FL.OPR_FLIGHT_SUFFIX)
AND FL.FLIGHT_LEG_ID = CFL.FLIGHT_LEG_ID
AND CFL.CUST_ID = CT.CUST_ID
AND FL.FLIGHT_LEG_ID=FSFL.FLIGHT_LEG_ID
AND FSFL.FLIGHT_SEG_ID=FS.FLIGHT_SEG_ID
AND CT.CUST_ID = CTR.CUST_ID(+)
AND CTR.REMARK_ID = RK.REMARK_ID(+)
AND FL.CUST_JRNY_ID = CJ.CUST_JRNY_ID
and CJ.CUST_JRNY_ID = CJV.CUST_JRNY_ID
AND CG.CUST_JRNY_ID(+) = CT.CUST_JRNY_ID
AND CFL.HISTORY_VERSION_NUMBER = 0
AND CT.HISTORY_VERSION_NUMBER = 0
AND NVL(CTR.HISTORY_VERSION_NUMBER,0) = 0
AND NVL(RK.HISTORY_VERSION_NUMBER,0) = 0
AND FS.HISTORY_VERSION_NUMBER = 0
AND FSFL.HISTORY_VERSION_NUMBER = 0
-- AND CJ.HISTORY_VERSION_NUMBER = 0
and CJV.VERSION_NUMBER = 0 --- Need to check
AND NVL(CG.HISTORY_VERSION_NUMBER,0) = 0
order by CT.CUST_JRNY_ID,CT.CUST_ID;
The Tables having record:
select COUNT(*) from FLIGHT_LEG -----241756
select COUNT(*) from CUST_FLIGHT_LEG---632585
select COUNT(*) from CUST---240015
select COUNT(*) from CUST_REMARK---73724
select COUNT(*) from REMARK---73654
select COUNT(*) from FLIGHT_SEG_FLIGHT_LEG---241789
select COUNT(*) from FLIGHT_SEG----260004
select COUNT(*) from CUST_JRNY----74288
select COUNT(*) from CUST_JRNY_VERSION----74477
select COUNT(*) from CUST_GROUP----55819
Thanks,
HP..
Plan hash value: 3771714931
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 10239 | 2949K| | 7515 (1)| 00:01:31 | | |
| 1 | SORT ORDER BY | | 10239 | 2949K| 3160K| 7515 (1)| 00:01:31 | | |
|* 2 | HASH JOIN | | 10239 | 2949K| | 6864 (1)| 00:01:23 | | |
| 3 | PARTITION HASH ALL | | 73687 | 1079K| | 417 (1)| 00:00:06 | 1 | 512 |
|* 4 | TABLE ACCESS FULL | CUST_JRNY_VERSION | 73687 | 1079K| | 417 (1)| 00:00:06 | 1 | 512 |
|* 5 | HASH JOIN | | 10239 | 2799K| | 6445 (1)| 00:01:18 | | |
| 6 | PARTITION HASH ALL | | 73654 | 863K| | 178 (1)| 00:00:03 | 1 | 512 |
| 7 | TABLE ACCESS FULL | CUST_JRNY | 73654 | 863K| | 178 (1)| 00:00:03 | 1 | 512 |
|* 8 | FILTER | | | | | | | | |
|* 9 | HASH JOIN RIGHT OUTER | | 10239 | 2679K| | 6267 (1)| 00:01:16 | | |
| 10 | PARTITION HASH ALL | | 55315 | 756K| | 137 (1)| 00:00:02 | 1 | 512 |
| 11 | TABLE ACCESS FULL | CUST_GROUP | 55315 | 756K| | 137 (1)| 00:00:02 | 1 | 512 |
|* 12 | FILTER | | | | | | | | |
|* 13 | HASH JOIN OUTER | | 10240 | 2540K| 2056K| 6129 (1)| 00:01:14 | | |
|* 14 | FILTER | | | | | | | | |
|* 15 | HASH JOIN RIGHT OUTER | | 10242 | 1930K| | 5531 (1)| 00:01:07 | | |
| 16 | INDEX FAST FULL SCAN | CUST_REMARK_PK | 73677 | 935K| | 190 (0)| 00:00:03 | | |
|* 17 | HASH JOIN | | 10257 | 1802K| | 5339 (1)| 00:01:05 | | |
|* 18 | HASH JOIN | | 10257 | 701K| | 3516 (1)| 00:00:43 | | |
|* 19 | HASH JOIN | | 3963 | 220K| | 2476 (1)| 00:00:30 | | |
|* 20 | HASH JOIN | | 3963 | 181K| | 1300 (1)| 00:00:16 | | |
| 21 | PARTITION HASH ALL | | 3963 | 131K| | 728 (1)| 00:00:09 | 1 | 512 |
|* 22 | TABLE ACCESS FULL | FLIGHT_LEG | 3963 | 131K| | 728 (1)| 00:00:09 | 1 | 512 |
|* 23 | INDEX FAST FULL SCAN| FLIGHT_SEG_FLIGHT_LEG_PK | 240K| 3059K| | 571 (1)| 00:00:07 | | |
| 24 | PARTITION HASH ALL | | 259K| 2531K| | 1175 (1)| 00:00:15 | 1 | 512 |
|* 25 | TABLE ACCESS FULL | FLIGHT_SEG | 259K| 2531K| | 1175 (1)| 00:00:15 | 1 | 512 |
| 26 | PARTITION HASH ALL | | 631K| 8011K| | 1037 (1)| 00:00:13 | 1 | 512 |
|* 27 | TABLE ACCESS FULL | CUST_FLIGHT_LEG | 631K| 8011K| | 1037 (1)| 00:00:13 | 1 | 512 |
| 28 | PARTITION HASH ALL | | 239K| 25M| | 1822 (1)| 00:00:22 | 1 | 512 |
|* 29 | TABLE ACCESS FULL | CUST | 239K| 25M| | 1822 (1)| 00:00:22 | 1 | 512 |
| 30 | PARTITION HASH ALL | | 73623 | 4385K| | 243 (1)| 00:00:03 | 1 | 512 |
| 31 | TABLE ACCESS FULL | REMARK | 73623 | 4385K| | 243 (1)| 00:00:03 | 1 | 512 |
Predicate Information (identified by operation id):
2 - access("CJ"."CUST_JRNY_ID"="CJV"."CUST_JRNY_ID")
4 - filter("CJV"."VERSION_NUMBER"=0)
5 - access("FL"."CUST_JRNY_ID"="CJ"."CUST_JRNY_ID")
8 - filter(NVL("CG"."HISTORY_VERSION_NUMBER",0)=0)
9 - access("CG"."CUST_JRNY_ID"(+)="CT"."CUST_JRNY_ID")
12 - filter(NVL("RK"."HISTORY_VERSION_NUMBER",0)=0)
13 - access("CTR"."REMARK_ID"="RK"."REMARK_ID"(+))
14 - filter(NVL("CTR"."HISTORY_VERSION_NUMBER",0)=0)
15 - access("CT"."CUST_ID"="CTR"."CUST_ID"(+))
17 - access("CFL"."CUST_ID"="CT"."CUST_ID")
18 - access("FL"."FLIGHT_LEG_ID"="CFL"."FLIGHT_LEG_ID")
19 - access("FSFL"."FLIGHT_SEG_ID"="FS"."FLIGHT_SEG_ID")
20 - access("FL"."FLIGHT_LEG_ID"="FSFL"."FLIGHT_LEG_ID")
22 - filter("FL"."DEPARTURE_STATION_CODE"='DEL' AND "FL"."DEPARTURE_DATETIME"=TO_DATE(' 2012-12-10 18:45:00', 'syyyy-mm-dd
hh24:mi:ss') AND "FL"."OPR_SERVICE_PROVIDER_CODE"='AI' AND "FL"."OPR_FLIGHT_NUMBER"=1 AND "FL"."OPR_FLIGHT_SUFFIX"='A' AND
"FL"."HISTORY_VERSION_NUMBER"=0)
23 - filter("FSFL"."HISTORY_VERSION_NUMBER"=0)
25 - filter("FS"."HISTORY_VERSION_NUMBER"=0)
27 - filter("CFL"."HISTORY_VERSION_NUMBER"=0)
29 - filter("CT"."HISTORY_VERSION_NUMBER"=0) -
Is there a way to have your PS script Query the name of your local Mail Server?
Hey Guys,
Really hoping someone can help me out here. I am currently working on a script that Automates the entire user creation process (AD account, Exchange Mailbox, UCM Soft phone). I want this script to be universal for all systems I have to work with.
By this I mean that I can run the script on various systems (Different Domains) without ever having to modify the code.
I have run into a bit of a problem while trying to automate the PSSession component of my script. I would like to be able to run the script and have it insert the relevant mail server name based on some sort of lookup or query. I am currently circumventing
the issue by having a text box pop up asking for the mail server name. It works, but it's not really what I want for the end product. It's frustrating, seeing as I have overcome similar issues for things like UPNs, AD Server, etc.
I have thought about having it pull from the DNS MX records, but there are a number of different records for different mail servers.
Any help would be awesome!
Notice that this has drifted a whole long way from the original question. I pointed out that it is easy with Exchange, but if the servers are in different companies or are not part of an enterprise deployment then you need to look at the local AD and
seek to the top. Any one mailbox will get you an Exchange server to remote into and use the Exchange shell to discover the network.
¯\_(ツ)_/¯
Looking back over the thread, that may be my fault. I mistook the comment about checking MX to mean he's just looking for a mail relay.
[string](0..33|%{[char][int](46+("686552495351636652556262185355647068516270555358646562655775 0645570").substring(($_*2),2))})-replace " " -
Is there a way to use REST service to query data from a forms collection?
I want to query and retrieve data from a SharePoint forms collection. I have a forms library that has multiple documents all being created using the same template.
I need to query and retrieve data from it using oData/ReST API.
I could see the /_vti_bin/listdata.svc and it seems I cannot get the forms data using this.
What will be the best approach?
Does that help?
If you found this post helpful, please “Vote as Helpful”. If it answered your question, please “Mark as Answer”.
Thanks,
Kangkan Goswami |Technical Architect| Blog: http://www.geekays.net/
http://in.linkedin.com/in/kangkan
Hi,
The REST service is not available to retrieve the data in forms.
For this issue, the common workaround I know of is to first promote the form fields as columns in the form library, then retrieve the column values instead. You can also use the REST service this way.
If this way is not convenient, please provide more information about the scenario of getting the data.
Thanks,
Qiao Wei
TechNet Community Support -
Is there a better way to do this projection/aggregate query?
Hi,
Summary:
Can anyone offer advice on how best to use JDO to perform
projection/aggregate queries? Is there a better way of doing what is
described below?
Details:
The web application I'm developing includes a GUI for ad-hoc reports on
JDO's. Unlike 3rd party tools that go straight to the database we can
implement business rules that restrict access to objects (by adding extra
predicates) and provide extra calculated fields (by adding extra get methods
to our JDO's - no expression language yet). We're pleased with the results
so far.
Now I want to make it produce reports with aggregates and projections
without instantiating JDO instances. Here is an example of the sort of thing
I want it to be capable of doing:
Each asset has one associated t.description and zero or one associated
d.description.
For every distinct combination of t.description and d.description (skip
those for which there are no assets)
calculate some aggregates over all the assets with these values.
and here it is in SQL:
select t.description type, d.description description, count(*) count,
sum(a.purch_price) sumPurchPrice
from assets a
left outer join asset_descriptions d
on a.adesc_no = d.adesc_no,
asset_types t
where a.atype_no = t.atype_no
group by t.description, d.description
order by t.description, d.description
it takes <100ms to produce 5300 rows from 83000 assets.
The nearest I have managed with JDO is (pseudo code):
perform projection query to get t.description, d.description for every asset
loop on results
if this is first time we've had this combination of t.description,
d.description
perform aggregate query to get aggregates for this combination
The java code is below. It takes about 16000ms (with debug/trace logging
off, c.f. 100ms for SQL).
If the inner query is commented out it takes about 1600ms (so the inner
query is responsible for 9/10ths of the elapsed time).
Timings exclude startup overheads like PersistenceManagerFactory creation
and checking the meta data against the database (by looping 5 times and
averaging only the last 4) but include PersistenceManager creation (which
happens inside the loop).
It would be too big a job for us to directly generate SQL from our generic
ad-hoc report GUI, so that is not really an option.
KodoQuery q1 = (KodoQuery) pm.newQuery(Asset.class);
q1.setResult("assetType.description, assetDescription.description");
q1.setOrdering("assetType.description ascending, assetDescription.description ascending");
KodoQuery q2 = (KodoQuery) pm.newQuery(Asset.class);
q2.setResult("count(purchPrice), sum(purchPrice)");
q2.declareParameters("String myAssetType, String myAssetDescription");
q2.setFilter("assetType.description == myAssetType && assetDescription.description == myAssetDescription");
q2.compile();
Collection results = (Collection) q1.execute();
Set distinct = new HashSet();
for (Iterator i = results.iterator(); i.hasNext();) {
Object[] cols = (Object[]) i.next();
String assetType = (String) cols[0];
String assetDescription = (String) cols[1];
String type_description =
assetDescription != null
? assetType + "~" + assetDescription
: assetType;
if (distinct.add(type_description)) {
    Object[] cols2 = (Object[]) q2.execute(assetType, assetDescription);
    // System.out.println("type " + assetType
    //     + ", description " + assetDescription
    //     + ", count " + cols2[0] + ", sum " + cols2[1]);
}
}
q2.closeAll();
q1.closeAll();
Neil,
It sounds like the problem that you're running into is that Kodo doesn't
yet support the JDO2 grouping constructs, so you're doing your own
grouping in the Java code. Is that accurate?
We do plan on adding direct grouping support to our aggregate/projection
capabilities in the near future, but as you've noticed, those
capabilities are not there yet.
-Patrick
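Until grouping support lands, the inner aggregate query can also be dropped entirely by adding purchPrice to the projection and aggregating client-side in one pass. A sketch, assuming each projection row is a (type, description, purchPrice) tuple; the class and method names here are made up, not Kodo API:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ClientSideGrouping {
    /** One pass over (type, description, purchPrice) rows: count and
     *  sum per distinct combination, using the same key scheme as the
     *  post, instead of one aggregate query per combination. */
    public static Map<String, double[]> aggregate(List<Object[]> rows) {
        Map<String, double[]> stats = new LinkedHashMap<>();
        for (Object[] row : rows) {
            String key = row[1] != null ? row[0] + "~" + row[1]
                                        : String.valueOf(row[0]);
            double[] s = stats.computeIfAbsent(key, k -> new double[2]);
            s[0] += 1;                                 // count(*)
            s[1] += ((Number) row[2]).doubleValue();   // sum(purchPrice)
        }
        return stats;
    }
}
```

This trades a little memory (one map entry per distinct combination, roughly 5300 here) for the nine-tenths of the elapsed time the inner query was costing.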
-
Best way to limit results from a query.
I need to limit the number of results returned from the database to the first x rows. What I need is a database independant way to achieve this - i.e. I can't use LIMIT, TOP, etc. in my query.
My question therefore is does statment.setMaxRows(x) restrict my results at the database level or my application's view to the database? I.e. will the complete SELECT * run within the database but the first x will be returned to the resultset, which could be quite a performance drain at db level. Or will this have the same effect as running a query SELECT TOP X FROM... which will be a much quicker query to run?
Any help greatly appreciated!The best, and only fool proof way to limit results (rows, columns) from a query, and not waste database resources is to code your query so it only returns the rows and columns that you need. That might mean providing date ranges, sequence number ranges and all other WHERE criteria that can be used to filter the data to exactly what you need.
There are ways to limit what is available to your Java program, but in most cases the entire SQL must be completed before any results can be returned. The simplest example: using the SORT command to get the data into the order you require will first require that all rows be made available to the sort. The fact that you only want the top 5 rows doesn't matter to the DBMS, because it cannot give them to you incrementally.
Focus on getting your SQL as efficient as possible. If the SQL runs efficiently, you can use almost any of the JDBC commands to limit, list, cursor forward and backward, etc, without regard to their overhead. -
What is the best way to reuse functions in MVC environment?
I have several commands (using Cairngorm MVC) which use the
same type of function that returns a value. I would like to have
this function in one place, and reuse it, rather than duplicate the
code in each command. What is the best way to do this?
You could put it in an ActionScript file, and then just
import that file in wherever you need it. -
DB6CONV/DB6 - Is there a way to reuse existing compression dictionary
As some of you probably know, DB6CONV handles compression by retaining the compression flag while performing the table move, but unfortunately not the compression dictionary. This causes some tables to increase in size, because of the way the new dictionary gets built based on a sample of 1000 rows. Please see an example below where the data size of a compressed table increased after the table move.
For table SAPR3./BI0/ACCA_O0900, please see the statistics below.
Old Table:
Data Size = 10,423,576 KB
Index Size = 9,623,776 KB
New Table (both data and index) with Index Compression (Moved using DB6CONV, took 1 hr for 100 million row table):
Data Size = 16,352,352 KB
Index Size = 4,683,296 KB
Reorg table with reset dictionary (By DB2 CLP takes 1hr and 32 Min)
Data Size = 8,823,776 KB
Index Size = 4,677,792 KB
We are on DB2 9.5 and will soon be migrating to DB2 9.7. In order to use the reclaimable table space feature that comes with DB2 9.7, We are planning on creating new tablespaces ( especially for objects like PSA/Change logs in BW) and then move the compressed tables after enabling index compression, but the DB6CONV is not going to be the right tool based on our experience.
Is there a way for DB6CONV or DB2 to take the existing compression dictionary of the source table and reuse it when it performs a table move to a new tablespace?
Thanks,
Raja
hi raja,
no, DB6CONV cannot reuse the existing compression dictionary - this is in general not possible.
BUT: the good news is, that the next version V5 of DB6CONV will (amongst other new features) handle compression in a much better way! like R3load and online_table_move the compression dictionary will then be created based on (if possible) 20MB of sampled data ensuring optimal compression.
this new version will become generally available within the next few weeks.
regards, frank