Problem with "BULK"
I have a fetch with BULK COLLECT:
- In Oracle 9.2 the fetch causes no records to be retrieved for the cursor.
- In Oracle 8i it runs fine.
Thanks
Could you post your code?
Similar Messages
-
Problem with Bulk printing of invoices using VF31.
Hi,
I tried to bulk-print invoices using VF31, for around 200+ copies. Printing was going on but suddenly stopped after 50 copies were printed.
The situation is the same even after I retry it.
Can someone advise on the reason, please?
Thanks.
Best regards,
Srikrishhna13
Issue identified: there was a problem with the network.
-
Problem with BULK COLLECT with million rows - Oracle 9.0.1.4
We have a requirement where we are supposed to load 58 million rows into a FACT table in our data warehouse. We initially planned to use Oracle Warehouse Builder but, due to performance reasons, decided to write custom code. We wrote a custom procedure which opens a simple cursor, reads all 58 million rows from the SOURCE table, and in a loop processes the rows and inserts the records into a TARGET table. The logic works fine, but it took 20 hours to complete the load.
We then tried to leverage the BULK COLLECT, FORALL and PARALLEL options and modified our PL/SQL code completely to reflect these. Our code looks very simple:
1. We declared PL/SQL tables indexed by BINARY_INTEGER to store the data in memory.
2. We used BULK COLLECT in the FETCH to retrieve the data.
3. We used a FORALL statement while inserting the data.
We did not introduce any of our transformation logic yet.
We tried with 600,000 records first and it completed in 1 min 29 sec with no problems. We then doubled the number of rows to 1.2 million and the program crashed with the following error:
ERROR at line 1:
ORA-04030: out of process memory when trying to allocate 16408 bytes (koh-kghu
call ,pmucalm coll)
ORA-06512: at "VVA.BULKLOAD", line 66
ORA-06512: at line 1
We got the same error even with 1 million rows.
We do have the following configuration:
SGA - 8.2 GB
PGA
- Aggregate Target - 3GB
- Current Allocated - 439444KB (439 MB)
- Maximum allocated - 2695753 KB (2.6 GB)
Temp Table Space - 60.9 GB (Total)
- 20 GB (Available approximately)
I think we do have more than enough memory to process 1 million rows!
Also, sometimes the same program results in the following error:
SQL> exec bulkload
BEGIN bulkload; END;
ERROR at line 1:
ORA-03113: end-of-file on communication channel
We did not even attempt the full load. Also, we are not using the PARALLEL option yet.
Are we hitting a bug here? Or is PL/SQL not capable of mass loads? I would appreciate any thoughts on this.
Thanks,
Haranadh
Following is the code:
set echo off
set timing on
create or replace procedure bulkload as
-- SOURCE --
TYPE src_cpd_dt IS TABLE OF ima_ama_acct.cpd_dt%TYPE;
TYPE src_acqr_ctry_cd IS TABLE OF ima_ama_acct.acqr_ctry_cd%TYPE;
TYPE src_acqr_pcr_ctry_cd IS TABLE OF ima_ama_acct.acqr_pcr_ctry_cd%TYPE;
TYPE src_issr_bin IS TABLE OF ima_ama_acct.issr_bin%TYPE;
TYPE src_mrch_locn_ref_id IS TABLE OF ima_ama_acct.mrch_locn_ref_id%TYPE;
TYPE src_ntwrk_id IS TABLE OF ima_ama_acct.ntwrk_id%TYPE;
TYPE src_stip_advc_cd IS TABLE OF ima_ama_acct.stip_advc_cd%TYPE;
TYPE src_authn_resp_cd IS TABLE OF ima_ama_acct.authn_resp_cd%TYPE;
TYPE src_authn_actvy_cd IS TABLE OF ima_ama_acct.authn_actvy_cd%TYPE;
TYPE src_resp_tm_id IS TABLE OF ima_ama_acct.resp_tm_id%TYPE;
TYPE src_mrch_ref_id IS TABLE OF ima_ama_acct.mrch_ref_id%TYPE;
TYPE src_issr_pcr IS TABLE OF ima_ama_acct.issr_pcr%TYPE;
TYPE src_issr_ctry_cd IS TABLE OF ima_ama_acct.issr_ctry_cd%TYPE;
TYPE src_acct_num IS TABLE OF ima_ama_acct.acct_num%TYPE;
TYPE src_tran_cnt IS TABLE OF ima_ama_acct.tran_cnt%TYPE;
TYPE src_usd_tran_amt IS TABLE OF ima_ama_acct.usd_tran_amt%TYPE;
src_cpd_dt_array src_cpd_dt;
src_acqr_ctry_cd_array src_acqr_ctry_cd;
src_acqr_pcr_ctry_cd_array src_acqr_pcr_ctry_cd;
src_issr_bin_array src_issr_bin;
src_mrch_locn_ref_id_array src_mrch_locn_ref_id;
src_ntwrk_id_array src_ntwrk_id;
src_stip_advc_cd_array src_stip_advc_cd;
src_authn_resp_cd_array src_authn_resp_cd;
src_authn_actvy_cd_array src_authn_actvy_cd;
src_resp_tm_id_array src_resp_tm_id;
src_mrch_ref_id_array src_mrch_ref_id;
src_issr_pcr_array src_issr_pcr;
src_issr_ctry_cd_array src_issr_ctry_cd;
src_acct_num_array src_acct_num;
src_tran_cnt_array src_tran_cnt;
src_usd_tran_amt_array src_usd_tran_amt;
j number := 1;
CURSOR c1 IS
SELECT
cpd_dt,
acqr_ctry_cd ,
acqr_pcr_ctry_cd,
issr_bin,
mrch_locn_ref_id,
ntwrk_id,
stip_advc_cd,
authn_resp_cd,
authn_actvy_cd,
resp_tm_id,
mrch_ref_id,
issr_pcr,
issr_ctry_cd,
acct_num,
tran_cnt,
usd_tran_amt
FROM ima_ama_acct ima_ama_acct
ORDER BY issr_bin;
BEGIN
OPEN c1;
FETCH c1 bulk collect into
src_cpd_dt_array ,
src_acqr_ctry_cd_array ,
src_acqr_pcr_ctry_cd_array,
src_issr_bin_array ,
src_mrch_locn_ref_id_array,
src_ntwrk_id_array ,
src_stip_advc_cd_array ,
src_authn_resp_cd_array ,
src_authn_actvy_cd_array ,
src_resp_tm_id_array ,
src_mrch_ref_id_array ,
src_issr_pcr_array ,
src_issr_ctry_cd_array ,
src_acct_num_array ,
src_tran_cnt_array ,
src_usd_tran_amt_array ;
CLOSE C1;
FORALL j in 1 .. src_cpd_dt_array.count
INSERT INTO ima_dly_acct (
CPD_DT,
ACQR_CTRY_CD,
ACQR_TIER_CD,
ACQR_PCR_CTRY_CD,
ACQR_PCR_TIER_CD,
ISSR_BIN,
OWNR_BUS_ID,
USER_BUS_ID,
MRCH_LOCN_REF_ID,
NTWRK_ID,
STIP_ADVC_CD,
AUTHN_RESP_CD,
AUTHN_ACTVY_CD,
RESP_TM_ID,
PROD_REF_ID,
MRCH_REF_ID,
ISSR_PCR,
ISSR_CTRY_CD,
ACCT_NUM,
TRAN_CNT,
USD_TRAN_AMT)
VALUES (
src_cpd_dt_array(j),
src_acqr_ctry_cd_array(j),
null,
src_acqr_pcr_ctry_cd_array(j),
null,
src_issr_bin_array(j),
null,
null,
src_mrch_locn_ref_id_array(j),
src_ntwrk_id_array(j),
src_stip_advc_cd_array(j),
src_authn_resp_cd_array(j),
src_authn_actvy_cd_array(j),
src_resp_tm_id_array(j),
null,
src_mrch_ref_id_array(j),
src_issr_pcr_array(j),
src_issr_ctry_cd_array(j),
src_acct_num_array(j),
src_tran_cnt_array(j),
src_usd_tran_amt_array(j));
COMMIT;
END bulkload;
/
SHOW ERRORS
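For what it's worth, a common way to keep the memory footprint bounded in a load like this (not shown in the original post) is to fetch in chunks with the LIMIT clause instead of pulling the whole result set into collections at once. A minimal sketch, reusing just two of the columns above for brevity (the remaining columns would follow the same pattern):

```sql
-- Sketch only: a chunked variant of the load above, assuming the same
-- ima_ama_acct and ima_dly_acct tables. LIMIT caps the collection size,
-- so process memory no longer grows with the total row count (the likely
-- cause of the ORA-04030 reported above).
create or replace procedure bulkload_chunked as
  TYPE t_issr_bin IS TABLE OF ima_ama_acct.issr_bin%TYPE;
  TYPE t_usd_tran_amt IS TABLE OF ima_ama_acct.usd_tran_amt%TYPE;
  l_issr_bin t_issr_bin;
  l_usd_tran_amt t_usd_tran_amt;
  CURSOR c1 IS
    SELECT issr_bin, usd_tran_amt
    FROM ima_ama_acct
    ORDER BY issr_bin;
BEGIN
  OPEN c1;
  LOOP
    FETCH c1 BULK COLLECT INTO l_issr_bin, l_usd_tran_amt LIMIT 10000;
    EXIT WHEN l_issr_bin.COUNT = 0;  -- test the count, not c1%NOTFOUND
    FORALL j IN 1 .. l_issr_bin.COUNT
      INSERT INTO ima_dly_acct (issr_bin, usd_tran_amt)
      VALUES (l_issr_bin(j), l_usd_tran_amt(j));
    COMMIT;
  END LOOP;
  CLOSE c1;
END bulkload_chunked;
/
```

The trade-off is one round of FORALL inserts per chunk instead of one giant array insert, which is usually still far faster than row-by-row processing.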
-----------------------------------------------------------------------------
Do you have a unique key available in the rows you are fetching?
It seems a cursor with 20 million rows that is as wide as all the columns you want to work with is a lot of memory for the server to use at once. You may be able to do this with parallel processing (DOP over 8) and a lot of memory for the warehouse box (and the box you are extracting data from)... but is this the most efficient (and thereby fastest) way to do it?
What if you used a cursor to select a unique key only, and then during the cursor loop fetched each record, transformed it, and inserted it into the target?
It's a different way to do a lot at once, but it cuts down on the overall memory overhead for the process.
I know this isn't as elegant as a single insert to do it all at once, but sometimes trimming a process down so it takes fewer resources at any given moment is much faster than trying to do the whole thing at once.
My solution is probably biased by transaction systems, so I would be interested in what the data warehouse community thinks of this.
For example:
source table my_transactions (tx_seq_id number, tx_fact1 varchar2(10), tx_fact2 varchar2(20), tx_fact3 number, ...)
select a cursor of tx_seq_id only (even at 20 million rows this is not much)
you could then either use a for loop or even bulk collect into a plsql collection or table
then process individually like this:
procedure process_a_tx(p_tx_seq_id in number)
is
rTX my_transactions%rowtype;
begin
select * into rTX from my_transactions where tx_seq_id = p_tx_seq_id;
--modify values as needed
insert into my_target(a, b, c) values (rTX.tx_fact1, rTX.tx_fact2, rTX.tx_fact3);
commit;
exception
when others then
rollback;
--write to a log or raise an exception
end process_a_tx;
procedure collect_tx
is
cursor tx is
select tx_seq_id from my_transactions;
begin
for rTx in tx loop
process_a_tx(rtx.tx_seq_id);
end loop;
end collect_tx; -
Problem with Bulk Collect ... FORALL
I've written the following code to bulk collect records from a cursor into a collection and insert them into a table using a FORALL loop:
OPEN x;
LOOP
FETCH x BULK COLLECT INTO v_collection LIMIT 1000;
FORALL i IN 1..v_collection.count
INSERT INTO tablename(column1, column2) VALUES(v_collection(i).val1, v_collection(i).val2);
COMMIT;
EXIT WHEN x%NOTFOUND;
END LOOP;
I have verified that the query executed by the cursor returns records, but when my procedure is executed the INSERT statement inside the FORALL loop never executes. Am I missing something here?
Regards,
Fahad
Yes, the cursor is returning a row.
I've found the solution myself. There was a trigger that was deleting data on commit. Because of this, the records were not being inserted. -
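As an aside, when fetching with LIMIT it is safer to exit on the collection count rather than on x%NOTFOUND, because %NOTFOUND becomes true as soon as a fetch returns fewer rows than the limit. A sketch using the same names as the loop in the question:

```sql
-- Same loop as in the question, but the exit test is based on the number
-- of rows actually fetched, so the final partial batch is always processed
-- and the placement of the EXIT no longer matters.
OPEN x;
LOOP
  FETCH x BULK COLLECT INTO v_collection LIMIT 1000;
  EXIT WHEN v_collection.COUNT = 0;
  FORALL i IN 1..v_collection.COUNT
    INSERT INTO tablename(column1, column2)
    VALUES(v_collection(i).val1, v_collection(i).val2);
  COMMIT;
END LOOP;
CLOSE x;
```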
Problem with bulk update and inheritance
Employee and Customer are subclasses of Person (strategy "SINGLE_TABLE"). Both are entities.
I launch a bulk update to modify the salary of all the employees:
em.getTransaction().begin();
Query q = em.createQuery("update Employee e set e.salary = 2000");
int n = q.executeUpdate();
em.getTransaction().commit();
In the table PERSON, the "salary" column of all the rows, even the customer rows!, is modified to 2000.
Can you tell me whether it is a bug of TopLink or I have made an error?
I have used glassfish-persistence-installer-v2-b23.jar to install toplink-essentials.
I filed a new TopLink bug: https://glassfish.dev.java.net/issues/show_bug.cgi?id=1448
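For context, with the SINGLE_TABLE strategy a bulk update on a subclass should be narrowed by the discriminator column. A sketch of the SQL involved (DTYPE is an assumed discriminator column name, not taken from the post):

```sql
-- What "update Employee e set e.salary = 2000" should translate to for a
-- SINGLE_TABLE hierarchy (DTYPE is an assumed discriminator column name):
UPDATE PERSON SET SALARY = 2000 WHERE DTYPE = 'Employee';

-- The behaviour described above suggests the generated SQL was missing
-- the discriminator predicate, i.e. something like:
UPDATE PERSON SET SALARY = 2000;
```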
-
Problems with my Creative Sound Blaster X-Fi Xtreme Gamer bulk P
Hey there!!!
I actually have a problem with my Creative Sound Blaster X-Fi Xtreme Gamer bulk PCI card. The card by itself is working perfectly, but I can't use the Digital-In as the Mic-In. I don't know how to switch it so that the Digital-In becomes the Mic-In, so my microphone doesn't work. I hear everything, but nobody hears my voice. There is sometimes a signal, but not the correct one; the guys playing with me don't hear my voice at all...
Can somebody help me plz?
Greetings, Thomas
Moep wrote:
Why must I use a TORRENT?
Why is this driver not on your page?!
Please help quickly...
And this is a secure download, you mean? ....
Jeez, calm down already. I never said you HAD to download it, but if you want any of the applications that work in XP to work in Vista then you're going to need that disk, either by downloading it or from here: http://us.creative.com/products/prod...7&product=6743
2. ISN'T THIS THE DRIVER FOR ME?! http://forums.creative.com/creativelabs/board/message?board.id=Vista&message.id=7838
Message Edited by Moep on 07-3-2007 0:7 PM
Maybe you really should take a Valium or something and re-read what I posted; then you may just grasp the fact that that is the exact driver I told you you needed. -
SB Live 24-bit! Internal (BULK),having problems with "some" sou
Excuse me for my bad English. I have a problem with my Sound Blaster 24-bit! Internal BULK.
It plays all the music and sounds all right, but there are some sounds that make scratching noises: in Soul Reaver it's the intro music, and in Battlefield 2 it's the sound of the jet turbines. In WoW it's the sounds of the special moves...
MP3s and movies play without any sound errors, and the games too; just these few sounds and music tracks scratch, though I can still hear the sound clearly as well.
any help would be great,
thx spike383
edit:
MSI K8N Neo2 Platinum Nforce 3 Ultra (Drivers 5.0)
AMD 64bit 3000+ Socket 939 Winchester (Arctic 64)
1024MB Kingston ValueRAM 400MHz (CL2.5) jaja I know, I know...
--(changing to 2048MB Corsair XMS (DDR400, CMX024-3200C2))-- ^^
Gainward Ultra 2400/GS GLH (400/200-NVSilencer.5.rev2) ForceWare 77.30
Sound Blaster Live! 24-bit 7.1 DD Internal (BULK)
Sharkoon Silentstorm 370W PSU
Thermaltake Soprano.
Message Edited by Spike83 on 07-08-2005 05:08 AM
Message Edited by Spike83 on 07-08-2005 05:34 AM
Well, doesn't anyone know how to help me out, not even the people from Creative?
I've noticed all the companies are lame and sleepy do-nothing people, like Gainward or BenQ; these su** too. Any help would be nice; don't just sell **bleep**, Creative, help your customers too, it would be nice. The system was formatted before I installed all the software; the AC97 controller was turned off in the BIOS, but the sound still scratches in some sounds and music files, though not in all. So what can I do!? -
Problem with setting oracle type parameter in viewobject query
Hi There,
I am facing a problem with JDeveloper 10.1.3. I have a view whose query has JDBC positional parameters that serve as IN parameters for a function, like:
SELECT x.day, x.special_exact_period_only
FROM (
SELECT x.day, x.special_exact_period_only
FROM (
SELECT
x.day,
rb.special_exact_period_only
FROM TABLE (
RentabilityPkg.findMarkerSlots(
'start',
? /* dchannel */,
NULL,
? /* resorts */,
'special',
NULL,
? /* code */,
NULL,
TRUNC(SYSDATE),
TRUNC(SYSDATE + 365 * 2),
NULL
) x
JOIN resourcebase rb USING (rentabilitymanager_id)
UNION
SELECT
x.day,
rb.special_exact_period_only
FROM TABLE (
RentabilityPkg.findMarkerSlots(
'start',
? /* dchannel */,
NULL,
? /* resorts */,
'composition',
NULL,
? /* code */,
NULL,
TRUNC(SYSDATE),
TRUNC(SYSDATE + 365 * 2),
NULL
) x
JOIN resourcebase rb USING (rentabilitymanager_id)
)x
ORDER BY x.day
) x
WHERE ROWNUM <= 30
Now, the JDBC positional parameters take our custom-defined list type, defined as:
CREATE TYPE NumberList AS TABLE OF NUMBER;
We are setting the parameter in the views with the help of the oracle.sql.ARRAY class, like:
/* Set parameters. */
public void setParams(Integer dchannelId, Integer[] resorts, String specialCode) {
    try {
        System.out.println(this.getClass() + ".setParams()");
        ARRAY arrParam1 = ((NWSApplicationModule)getApplicationModule()).toSQLNumberList(Arrays.asList(resorts));
        ARRAY arrParam2 = ((NWSApplicationModule)getApplicationModule()).toSQLNumberList(Arrays.asList(resorts));
        System.out.println("arrParam1 - " + arrParam1);
        System.out.println(this.getClass() + " ARRAY - " + arrParam1.getArray());
        System.out.println(this.getClass() + " -- " + arrParam1.length());
        System.out.println("arrParam2 - " + arrParam2);
        System.out.println(this.getClass() + " ARRAY - " + arrParam2.getArray());
        System.out.println(this.getClass() + " -- " + arrParam2.length());
        Object[] params = {
            dchannelId,
            arrParam1,
            specialCode,
            dchannelId,
            arrParam2,
            specialCode
        };
        setWhereClauseParams(params);
        System.out.println("DONE WITH " + this.getClass() + ".setParams()");
    } catch (Exception ex) {
        ex.printStackTrace(System.out);
    }
}
The toSQLNumberList() method is defined in our App module base class as follows:
public ARRAY toSQLNumberList(Collection coll) {
    debug("toSQLNumberList()");
    DBTransaction txn = (DBTransaction)getTransaction();
    debug("txn - " + txn + " : " + txn.getClass());
    return NWSUtil.toSQLNumberList(coll, getConnection(txn));
}

public static ARRAY toSQLNumberList(Collection c, Connection connection) {
    //printTrace();
    debug("toSQLNumberList()");
    try {
        ArrayDescriptor numberList = ArrayDescriptor.createDescriptor("NUMBERLIST", connection);
        NUMBER[] elements = new NUMBER[c == null ? 0 : c.size()];
        if (elements.length > 0) {
            Iterator iter = c.iterator();
            for (int i = 0; iter.hasNext(); i++) {
                elements[i] = new NUMBER(iter.next().toString());
            }
        }
        return new ARRAY(numberList, connection, elements);
    } catch (Exception ex) {
        ex.printStackTrace();
        return null;
    }
}
protected Connection getConnection(DBTransaction dbTransaction) {
    //return null;
    debug("Inside getConnection()");
    CallableStatement s = null;
    try {
        /*
         * Getting a Connection in BC4J is dirty, but it's better,
         * as otherwise we might end up coding with connections
         * and the Transaction Integrity will be
         */
        s = dbTransaction.createCallableStatement("BEGIN NULL; END;", 0);
        debug("DOING s.getConnection()...");
        Connection conn = s.getConnection();
        debug("DONE WITH s.getConnection()...");
        /*try {
            throw new Exception("TEST");
        } catch (Exception ex) {
            ex.printStackTrace(System.out);
        }*/
        debug("conn CLASS - " + conn.getClass());
        return conn;
    } catch (Exception ex) {
        ex.printStackTrace();
        return null;
    } finally {
        try { s.close(); }
        catch (Exception ex) {}
    }
}
Whenever we try setting the parameters in the view using setParams() and use this view to set the model of a Java control, it throws the following exception:
[2006-10-10 12:34:48,797 AWT-EventQueue-0 ERROR] JBO-28302: Piggyback write error
oracle.jbo.PiggybackException: JBO-28302: Piggyback write error
at oracle.jbo.common.PiggybackOutput.getPiggybackStream(PiggybackOutput.java:185)
at oracle.jbo.common.JboServiceMessage.marshalRefs(JboServiceMessage.java:267)
at oracle.jbo.server.remote.PiggybackManager.marshalServiceMessage(PiggybackManager.java:343)
at oracle.jbo.server.remote.PiggybackManager.marshalServiceMessage(PiggybackManager.java:316)
at oracle.jbo.server.remote.AbstractRemoteApplicationModuleImpl.processMessage(AbstractRemoteApplicationModuleImpl.java:2283)
at oracle.jbo.server.ApplicationModuleImpl.doMessage(ApplicationModuleImpl.java:7509)
at oracle.jbo.server.remote.AbstractRemoteApplicationModuleImpl.sync(AbstractRemoteApplicationModuleImpl.java:2221)
at oracle.jbo.server.remote.ejb.ServerApplicationModuleImpl.doMessage(ServerApplicationModuleImpl.java:79)
at oracle.jbo.server.ejb.SessionBeanImpl.doMessage(SessionBeanImpl.java:474)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.evermind.server.ejb.interceptor.joinpoint.EJBJoinPointImpl.invoke(EJBJoinPointImpl.java:35)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:69)
at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSInterceptor.java:52)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:69)
at com.evermind.server.ejb.interceptor.system.TxBeanManagedInterceptor.invoke(TxBeanManagedInterceptor.java:53)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:69)
at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSInterceptor.java:52)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:69)
at com.evermind.server.ejb.StatefulSessionEJBObject.OC4J_invokeMethod(StatefulSessionEJBObject.java:840)
at RemoteAMReservation_StatefulSessionBeanWrapper906.doMessage(RemoteAMReservation_StatefulSessionBeanWrapper906.java:286)
at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.evermind.server.rmi.RmiMethodCall.run(RmiMethodCall.java:53)
at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
at java.lang.Thread.run(Thread.java:595)
## Detail 0 ##
java.io.NotSerializableException: oracle.jdbc.driver.T4CConnection
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1075)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1369)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1341)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1284)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1073)
at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1245)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1069)
at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1245)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1069)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:291)
at oracle.jbo.common.SvcMsgResponseValues.writeObject(SvcMsgResponseValues.java:116)
at sun.reflect.GeneratedMethodAccessor65.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:890)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1333)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1284)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1073)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:291)
at oracle.jbo.common.PiggybackOutput.getPiggybackStream(PiggybackOutput.java:173)
at oracle.jbo.common.JboServiceMessage.marshalRefs(JboServiceMessage.java:267)
at oracle.jbo.server.remote.PiggybackManager.marshalServiceMessage(PiggybackManager.java:343)
at oracle.jbo.server.remote.PiggybackManager.marshalServiceMessage(PiggybackManager.java:316)
at oracle.jbo.server.remote.AbstractRemoteApplicationModuleImpl.processMessage(AbstractRemoteApplicationModuleImpl.java:2283)
at oracle.jbo.server.ApplicationModuleImpl.doMessage(ApplicationModuleImpl.java:7509)
at oracle.jbo.server.remote.AbstractRemoteApplicationModuleImpl.sync(AbstractRemoteApplicationModuleImpl.java:2221)
at oracle.jbo.server.remote.ejb.ServerApplicationModuleImpl.doMessage(ServerApplicationModuleImpl.java:79)
at oracle.jbo.server.ejb.SessionBeanImpl.doMessage(SessionBeanImpl.java:474)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.evermind.server.ejb.interceptor.joinpoint.EJBJoinPointImpl.invoke(EJBJoinPointImpl.java:35)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:69)
at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSInterceptor.java:52)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:69)
at com.evermind.server.ejb.interceptor.system.TxBeanManagedInterceptor.invoke(TxBeanManagedInterceptor.java:53)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:69)
at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSInterceptor.java:52)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:69)
at com.evermind.server.ejb.StatefulSessionEJBObject.OC4J_invokeMethod(StatefulSessionEJBObject.java:840)
at RemoteAMReservation_StatefulSessionBeanWrapper906.doMessage(RemoteAMReservation_StatefulSessionBeanWrapper906.java:286)
at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.evermind.server.rmi.RmiMethodCall.run(RmiMethodCall.java:53)
at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
at java.lang.Thread.run(Thread.java:595)
This is a typical interaction between two server-side components (a view object and an app module). Now the question is: why is this exception thrown? Any answers?
This application is one that we have migrated from 904 to 1013 and are trying to get it running in 3-tier.
Regards,
Anupam
Sorry, I missed out some semicolons; the script follows:
-- The following TABLE was created to simulate the issue
CREATE TABLE TEST_OBJECT (
ASSET_ID NUMBER,
OBJECT_ID NUMBER,
NAME VARCHAR2(50)
);
INSERT INTO TEST_OBJECT VALUES(1,1,'AAA');
INSERT INTO TEST_OBJECT VALUES(2,2,'BBB');
INSERT INTO TEST_OBJECT VALUES(3,3,'CCC');
COMMIT;
SELECT * FROM TEST_OBJECT;
-- The following TYPES were created to simulate the issue
-- (the object type must be created before the collection type that depends on it)
CREATE OR REPLACE
TYPE DutyResultObject AS OBJECT
( ASSET_ID NUMBER,
OBJECT_ID NUMBER,
NAME VARCHAR2(150)
);
/
CREATE OR REPLACE
TYPE DUTYRESULTOBJECTTAB AS TABLE OF DUTYRESULTOBJECT;
/
-- The following PACKAGE and FUNCTION were created to simulate the issue
CREATE OR REPLACE PACKAGE TESTOBJECTPKG
IS
FUNCTION OBJECTSEARCH(P_RESOURCE IN NUMBERLIST) RETURN DUTYRESULTOBJECTTAB;
END;
CREATE OR REPLACE PACKAGE BODY TESTOBJECTPKG
IS
FUNCTION OBJECTSEARCH(P_RESOURCE IN NUMBERLIST) RETURN DUTYRESULTOBJECTTAB
IS
BULKDUTYRESULTOBJECTTAB DUTYRESULTOBJECTTAB;
BEGIN
SELECT DUTYRESULTOBJECT(ASSET_ID, OBJECT_ID, NAME)
BULK COLLECT INTO BULKDUTYRESULTOBJECTTAB
FROM TEST_OBJECT;
RETURN BULKDUTYRESULTOBJECTTAB;
END;
END;
-
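To exercise the simulation end to end, the function can be queried with TABLE() (this assumes the NUMBERLIST type defined earlier in the thread):

```sql
-- Query the collection returned by OBJECTSEARCH as if it were a table.
-- With the three rows inserted above, this should return AAA, BBB and CCC.
SELECT asset_id, object_id, name
FROM TABLE(TESTOBJECTPKG.OBJECTSEARCH(NUMBERLIST(1, 2, 3)));
```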
Problem with READ Statement in the field routine of the Transformation
Hi,
I have a problem with a READ statement with BINARY SEARCH in the field routine of a transformation.
The READ statement works well when I check it in debugging mode, but it does not work properly for a bulk load in the background. Below are the steps I have implemented for my requirement.
1. I select the records from the lookup DSO into one internal table FOR ALL ENTRIES in SOURCE_PACKAGE.
2. I read the same internal table in the field routine for each SOURCE_PACKAGE entry and set the flag for that field.
Code in the start routine
select source accno end_dt acctp from zcam_o11
into table it_zcam
for all entries in source_package
where source = source_package-source
and accno = source_package-accno.
if sy-subrc = 0.
delete it_zcam where acctp <> 3.
delete it_zcam where end_dt is initial.
sort it_zcam by source accno.
endif.
field routine code:
read table it_zcam with key source = source_package-source
accno = source_package-accno
binary search
transporting no fields.
if sy-subrc = 0.
RESULT = 'Y'.
else.
RESULT = 'N'.
endif.
This piece of code exists in another model, where it works fine; when it comes to my code it does not work properly, yet when I debug the transformation it works fine for those ACCNO values.
The problem is that when I do a full load the code does not work properly and populates the wrong value in the RESULT field.
I am using this field in the report filter.
Please let me know if anybody knows the solution, or the reason for this strange behaviour.
thanks,
Rahim.
I suppose the below is not the actual code; the active table of the DSO would be /bic/azcam_o1100...
1. Is the key of zcam_o11 SOURCE and ACCNO?
2. You need to get the SORT out of the IF/ENDIF (see the code below):
select source accno end_dt acctp from zcam_o11
into table it_zcam
for all entries in source_package
where source = source_package-source
and accno = source_package-accno.
if sy-subrc = 0.
delete it_zcam where acctp <> 3.
delete it_zcam where end_dt is initial.
endif.
sort it_zcam by source accno.
field routine code:
read table it_zcam with key source = source_package-source
accno = source_package-accno
binary search
transporting no fields.
if sy-subrc = 0.
RESULT = 'Y'.
else.
RESULT = 'N'.
endif. -
Problem with USB External Hard Disk Drive
I have a similar problem with an MK6025GAS hard disk in a Sweex casing connected via USB, as Raistlfiren described in this post, but I am not sure whether it has something to do with the kernel. The problem is that when I plug in the hard disk via USB it is not even shown in /dev/ or by fdisk -l. I had similar problems with the drive before, but it was always shown in /dev.
I got the same output from dmesg as Raistlfiren in the post above:
# dmesg | tail
sd 4:0:0:0: [sdd] ASC=0x0 ASCQ=0x0
sd 4:0:0:0: [sdd] Sense Key : 0x0 [current]
Info fld=0x0
I was browsing the net for a long time to find a solution, but nothing helped much. The problem is closest to the one described on the Gentoo Forum.
I can see that it is recognized by the computer, since it is shown by lsusb:
# lsusb
Bus 001 Device 005: ID 13fd:0540 Initio Corporation
# lsusb -d 13fd:0540 -v
Bus 001 Device 005: ID 13fd:0540 Initio Corporation
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.00
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 64
idVendor 0x13fd Initio Corporation
idProduct 0x0540
bcdDevice 0.00
iManufacturer 1 Initio
iProduct 2 MK6025GAS
iSerial 3 0010100500000000
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
wTotalLength 32
bNumInterfaces 1
bConfigurationValue 1
iConfiguration 0
bmAttributes 0xc0
Self Powered
MaxPower 2mA
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 0
bNumEndpoints 2
bInterfaceClass 8 Mass Storage
bInterfaceSubClass 6 SCSI
bInterfaceProtocol 80 Bulk (Zip)
iInterface 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0200 1x 512 bytes
bInterval 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x02 EP 2 OUT
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0200 1x 512 bytes
bInterval 1
Device Qualifier (for other device speed):
bLength 10
bDescriptorType 6
bcdUSB 2.00
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 64
bNumConfigurations 1
Device Status: 0x0001
Self Powered
From the beginning I thought, and I still think, that the partition table is screwed up, but programs like TestDisk and fixdisktable work only with disks shown in /dev/.
Additionally, I have checked the content of /var/log/kernel.log
Sep 16 22:03:58 hramat kernel: usb 1-2: new high speed USB device using ehci_hcd and address 5
Sep 16 22:03:58 hramat kernel: usb 1-2: configuration #1 chosen from 1 choice
Sep 16 22:03:58 hramat kernel: scsi4 : SCSI emulation for USB Mass Storage devices
Sep 16 22:03:58 hramat kernel: usb-storage: device found at 5
Sep 16 22:03:58 hramat kernel: usb-storage: waiting for device to settle before scanning
Sep 16 22:04:03 hramat kernel: scsi 4:0:0:0: Direct-Access Initio MK6025GAS 2.23 PQ: 0 ANSI: 0
Sep 16 22:04:03 hramat kernel: sd 4:0:0:0: Attached scsi generic sg4 type 0
Sep 16 22:04:03 hramat kernel: usb-storage: device scan complete
Sep 16 22:04:03 hramat kernel: sd 4:0:0:0: [sdd] 117210240 512-byte hardware sectors: (60.0 GB/55.8 GiB)
Sep 16 22:04:03 hramat kernel: sd 4:0:0:0: [sdd] Write Protect is off
Sep 16 22:04:03 hramat kernel: sd 4:0:0:0: [sdd] Mode Sense: 86 0b 00 02
Sep 16 22:04:03 hramat kernel: sd 4:0:0:0: [sdd] Assuming drive cache: write through
Sep 16 22:04:03 hramat kernel: sd 4:0:0:0: [sdd] Assuming drive cache: write through
Sep 16 22:04:03 hramat kernel: sdd:<6>sd 4:0:0:0: [sdd] Sense Key : 0x0 [current]
Sep 16 22:04:03 hramat kernel: Info fld=0x0
Sep 16 22:04:03 hramat kernel: sd 4:0:0:0: [sdd] ASC=0x0 ASCQ=0x0
Sep 16 22:04:03 hramat kernel: sd 4:0:0:0: [sdd] Sense Key : 0x0 [current]
and /var/log/errors.log
Sep 16 22:04:03 hramat kernel: sd 4:0:0:0: [sdd] Assuming drive cache: write through
Sep 16 22:04:03 hramat kernel: sd 4:0:0:0: [sdd] Assuming drive cache: write through
Sep 16 22:07:35 hramat kernel: INFO: task async/0:3957 blocked for more than 120 seconds.
Sep 16 22:07:35 hramat kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
The only thing I understand from these logs is that the disk is blocked, and therefore not listed in /dev.
Assuming newer-kernel problems, I could try some older live Linux CD to see whether that would work. I am also thinking of connecting this hard drive directly to my laptop, using a live Linux CD, and maybe checking the output of hdparm. Is there anything else I could check or try?
Thank you for any help or suggestions.
Matej
Thank you, nTia89, for the response. Sorry for not providing enough information.
I believe the problem is not system dependent. I have a dual boot with Windows, and there the disk also has problems. However, I do have Arch32 with kernel 2.6.30, using GNOME; hal and dbus are also running.
I did not try to connect the disk to the computer directly; I will try it today.
Yesterday I used SystemRescueCD 0.4.1 with kernel 2.6.22. I wanted to see whether the drive would be recognized by the system and placed in /dev/. Yes, it was. This means that the problem highlighted on the Gentoo forum may be real, but it doesn't solve my problem. I have tried connecting the drive to Arch several times and it was not shown in /dev/sd*; in SystemRescueCD it appeared as /dev/sdb. Now I am sure that the partition table is screwed up.
So I started to play with the drive in SystemRescueCD using TestDisk and FixDiskTable, but without success.
% fdisk -l
Disk /dev/sda: 100.0 GB, 100030242816 bytes
255 heads, 63 sectors/track, 12161 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 1530 12289693+ 7 HPFS/NTFS
/dev/sda2 1531 6672 41303115 7 HPFS/NTFS
/dev/sda3 6673 12161 44090392+ f W95 Ext'd (LBA)
/dev/sda5 * 6673 11908 42058138+ 83 Linux
/dev/sda6 11909 12161 2032191 82 Linux swap / Solaris
Disk /dev/sdb: 60.0 GB, 60011642880 bytes
64 heads, 32 sectors/track, 57231 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdb doesn't contain a valid partition table
Manufacturer disk geometry: Heads: 16; Cylinders: 16383; Sectors: 63; Logical Blocks (LBA): 117210240
TestDisk found only 43 GB Linux partitions, while the disk actually had a single 60 GB partition formatted with FAT32/NTFS.
It also reported 64 heads, 57231 cylinders, and 32 sectors (the same as fdisk -l), which obviously differs from the manufacturer's disk geometry.
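As a quick cross-check (the numbers are copied from the outputs above; the interpretation is my own), the geometry "mismatch" is mostly two different CHS translations of the same LBA count, which is the authoritative size:

```python
SECTOR = 512  # bytes per logical sector

# Manufacturer's LBA count -- this is the real capacity
lba_blocks = 117210240
print(lba_blocks * SECTOR)   # 60011642880 -- exactly fdisk's "60011642880 bytes"

# fdisk's CHS translation (57231 cylinders x 64 heads x 32 sectors)
# comes out slightly smaller because the last partial cylinder is dropped
print(57231 * 64 * 32)       # 117209088 sectors, vs 117210240 LBA blocks

# The manufacturer's CHS (16383 x 16 x 63) is just the ATA cylinder cap,
# not the real geometry -- it covers far less than the LBA count
print(16383 * 16 * 63)       # 16514064 sectors
```

So the CHS figures from fdisk and TestDisk are not evidence of damage by themselves; the missing partition table is.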
testdisk.log:
Thu Sep 17 19:09:26 2009
Command line: TestDisk
TestDisk 6.8, Data Recovery Utility, August 2007
Christophe GRENIER
Linux version (ext2fs lib: 1.40.2, ntfs lib: 9:0:0, reiserfs lib: 0.3.1-rc8, ewf lib: none)
Using locale 'C'.
Hard disk list
Disk /dev/sda - 100 GB / 93 GiB - CHS 12161 255 63, sector size=512
Disk /dev/sdb - 60 GB / 55 GiB - CHS 57231 64 32, sector size=512
Disk /dev/sdb - 60 GB / 55 GiB
Partition table type: Intel
Interface Advanced
New options :
Dump : No
Cylinder boundary : Yes
Allow partial last cylinder : No
Expert mode : No
Analyse Disk /dev/sdb - 60 GB / 55 GiB - CHS 57231 64 32
Current partition structure:
Partition sector doesn't have the endmark 0xAA55
Ask the user for vista mode
Computes LBA from CHS for Disk /dev/sdb - 60 GB / 55 GiB - CHS 57232 64 32
Allow partial last cylinder : Yes
search_vista_part: 1
search_part()
Disk /dev/sdb - 60 GB / 55 GiB - CHS 57232 64 32
Search for partition aborted
Results
interface_write()
No partition found or selected for recovery
search_part()
Disk /dev/sdb - 60 GB / 55 GiB - CHS 57232 64 32
Search for partition aborted
Results
interface_write()
No partition found or selected for recovery
simulate write!
write_mbr_i386: starting...
Store new MBR code
write_all_log_i386: starting...
No extended partition
Analyse Disk /dev/sdb - 60 GB / 55 GiB - CHS 57232 64 32
Current partition structure:
Partition sector doesn't have the endmark 0xAA55
Ask the user for vista mode
Allow partial last cylinder : Yes
search_vista_part: 1
search_part()
Disk /dev/sdb - 60 GB / 55 GiB - CHS 57232 64 32
Results
interface_write()
No partition found or selected for recovery
search_part()
Disk /dev/sdb - 60 GB / 55 GiB - CHS 57232 64 32
NTFS at 8956/63/32
heads/cylinder 255 (NTFS) != 64 (HD)
sect/track 63 (NTFS) != 32 (HD)
filesystem size 24579387
sectors_per_cluster 8
mft_lcn 1024141
mftmirr_lcn 1650676
clusters_per_mft_record -10
clusters_per_index_record 1
NTFS part_offset=9392094720, part_size=12584646144, sector_size=512
NTFS partition cannot be added (part_offset<part_size).
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 34129 1 1 75201 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=69896224, size=84116272, end=154012495, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 34632 2 1 75704 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=70926400, size=84116272, end=155042671, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 34668 0 1 75740 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=71000064, size=84116272, end=155116335, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 34673 1 1 75745 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=71010336, size=84116272, end=155126607, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 34699 2 1 75771 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=71063616, size=84116272, end=155179887, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 34708 2 1 75780 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=71082048, size=84116272, end=155198319, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 36338 0 1 77410 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=74420224, size=84116272, end=158536495, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 36367 0 1 77439 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=74479616, size=84116272, end=158595887, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 36401 2 1 77473 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=74549312, size=84116272, end=158665583, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 36414 2 1 77486 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=74575936, size=84116272, end=158692207, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 37949 1 1 79021 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=77719584, size=84116272, end=161835855, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 37955 1 1 79027 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=77731872, size=84116272, end=161848143, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 37989 1 1 79061 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=77801504, size=84116272, end=161917775, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 38404 0 1 79476 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=78651392, size=84116272, end=162767663, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 39636 2 1 80708 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=81174592, size=84116272, end=165290863, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 41263 1 1 82335 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=84506656, size=84116272, end=168622927, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 41266 1 1 82338 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=84512800, size=84116272, end=168629071, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 41660 0 1 82732 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=85319680, size=84116272, end=169435951, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 42898 0 1 83970 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=87855104, size=84116272, end=171971375, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 43244 1 1 84316 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=88563744, size=84116272, end=172680015, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 44870 2 1 85942 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=91893824, size=84116272, end=176010095, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 44930 2 1 86002 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=92016704, size=84116272, end=176132975, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 46961 0 1 88033 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=96176128, size=84116272, end=180292399, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 47312 0 1 88384 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=96894976, size=84116272, end=181011247, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 48393 2 1 89465 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=99108928, size=84116272, end=183225199, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 49633 2 1 90705 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=101648448, size=84116272, end=185764719, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 50767 1 1 91839 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=103970848, size=84116272, end=188087119, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 51150 1 1 92222 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=104755232, size=84116272, end=188871503, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 51941 1 1 93013 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=106375200, size=84116272, end=190491471, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 52759 0 1 93831 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=108050432, size=84116272, end=192166703, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 53069 1 1 94141 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=108685344, size=84116272, end=192801615, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 53768 0 1 94840 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=110116864, size=84116272, end=194233135, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 54287 0 1 95359 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=111179776, size=84116272, end=195296047, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 54493 2 1 95565 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=111601728, size=84116272, end=195717999, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 54861 1 1 95933 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=112355360, size=84116272, end=196471631, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 54890 2 1 95962 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=112414784, size=84116272, end=196531055, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 54953 2 1 96025 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=112543808, size=84116272, end=196660079, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 56330 1 1 97402 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=115363872, size=84116272, end=199480143, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 56334 0 1 97406 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=115372032, size=84116272, end=199488303, disk end=117211136)
recover_EXT2: s_block_group_nr=0/320, s_mnt_count=31/34, s_blocks_per_group=32768
recover_EXT2: boot_sector=0, s_blocksize=4096
recover_EXT2: s_blocks_count 10514534
recover_EXT2: part_size 84116272
D Linux 57203 0 1 98275 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
This partition ends after the disk limits. (start=117151744, size=84116272, end=201268015, disk end=117211136)
Disk /dev/sdb - 60 GB / 55 GiB - CHS 57232 64 32
Check the harddisk size: HD jumpers settings, BIOS detection...
The harddisk (60 GB / 55 GiB) seems too small! (< 103 GB / 95 GiB)
The following partitions can't be recovered:
D Linux 34129 1 1 75201 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 34632 2 1 75704 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 34668 0 1 75740 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 34673 1 1 75745 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 34699 2 1 75771 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 34708 2 1 75780 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 36338 0 1 77410 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 36367 0 1 77439 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 36401 2 1 77473 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 36414 2 1 77486 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 37949 1 1 79021 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 37955 1 1 79027 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 37989 1 1 79061 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 38404 0 1 79476 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 39636 2 1 80708 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 41263 1 1 82335 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 41266 1 1 82338 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 41660 0 1 82732 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 42898 0 1 83970 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 43244 1 1 84316 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 44870 2 1 85942 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 44930 2 1 86002 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 46961 0 1 88033 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 47312 0 1 88384 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 48393 2 1 89465 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 49633 2 1 90705 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 50767 1 1 91839 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 51150 1 1 92222 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 51941 1 1 93013 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 52759 0 1 93831 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 53069 1 1 94141 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 53768 0 1 94840 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 54287 0 1 95359 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 54493 2 1 95565 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 54861 1 1 95933 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 54890 2 1 95962 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 54953 2 1 96025 27 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 56330 1 1 97402 26 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 56334 0 1 97406 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
D Linux 57203 0 1 98275 25 16 84116272
EXT3 Large file Sparse superblock Recover, 43 GB / 40 GiB
Results
interface_write()
No partition found or selected for recovery
simulate write!
write_mbr_i386: starting...
Store new MBR code
write_all_log_i386: starting...
No extended partition
Interface Advanced
Disk /dev/sdb - 60 GB / 55 GiB
Partition table type: Intel
Disk /dev/sdb - 60 GB / 55 GiB
Partition table type: Intel
New options :
Dump : No
Cylinder boundary : Yes
Allow partial last cylinder : No
Expert mode : No
New options :
Dump : No
Cylinder boundary : Yes
Allow partial last cylinder : No
Expert mode : No
Analyse Disk /dev/sdb - 60 GB / 55 GiB - CHS 57232 64 32
Current partition structure:
Partition sector doesn't have the endmark 0xAA55
Ask the user for vista mode
Allow partial last cylinder : No
search_vista_part: 0
search_part()
Disk /dev/sdb - 60 GB / 55 GiB - CHS 57232 64 32
Search for partition aborted
Results
Can't open backup.log file: No such file or directory
interface_load
interface_write()
No partition found or selected for recovery
simulate write!
write_mbr_i386: starting...
Store new MBR code
write_all_log_i386: starting...
No extended partition
TestDisk exited normally.
fixdisktable first output:
% ./fixdisktable -d /dev/sdb
Getting hard disk geometry
cylinders=57231, heads=64, sectors=32
end_offset: 2147482624
FfEeSsNnBbUuFfEeSsNnBbUuFfEeSsNnBbUuFfEeSsNnBbUuFfEeSsNnBbUuFfEeSsNnBbUuFfEeSsNnBbUuFfEeSs
EXT2 partition at offset 56832, length=(41072.398 MB) 43067531264
Sectors: start= 111, end= 84116382, length= 84116272
Hd,Sec,Cyl: start(3,16,0) end(28,31,41072)
Done searching for partitions.
Nr AF Hd Sec Cyl Hd Sec Cyl Start Size ID
1 80 3 16 0 63 32 1023 111 84116273 83 (Interpretted)
1 80 3 16 0 63 224 255 111 84116273 83 (RAW)
1: 8003 1000 833f e0ff 6f00 0000 3183 0305
2: 0000 0000 0000 0000 0000 0000 0000 0000
3: 0000 0000 0000 0000 0000 0000 0000 0000
4: 0000 0000 0000 0000 0000 0000 0000 0000
Do you wish to write this partition table to disk (yes/no)? no
fixdisktable second output:
% ./fixdisktable -d -r -v /dev/sdb
Getting hard disk geometry
cylinders=57231, heads=64, sectors=32
end_offset: 2147482624
FfEeSsNnBbUuFfEeSsNnBbUuFfEeSsNnBbUuFfEeSsNnBbUuFfEeSsNnBbUuFfEeSsNnBbUuFfEeSsNnBbUuFfEeSs
NTFS partition at offset 17483776, length=(17592186043512.582 MB) 184467440727622 49216
Sectors: start= 34148, end=36028797017147916, length=36028797017113768
Hd,Sec,Cyl: start(43,5,16) end(16,12,2096265)
Done searching for partitions.
Nr AF Hd Sec Cyl Hd Sec Cyl Start Size ID
1 80 43 5 16 63 32 1023 34148 -1850199 07 (Interpretted)
1 80 43 5 16 63 224 255 34148 -1850199 07 (RAW)
1: 802b 0510 073f e0ff 6485 0000 a9c4 e3ff
2: 0000 0000 0000 0000 0000 0000 0000 0000
3: 0000 0000 0000 0000 0000 0000 0000 0000
4: 0000 0000 0000 0000 0000 0000 0000 0000
Do you wish to write this partition table to disk (yes/no)? no
The string "FfEeSsNnBbUu" kept repeating for quite a while; it is most probably debugging or progress output from fixdisktable's verbose mode.
As I mentioned, I will try connecting the disk directly to the computer and see what happens.
Should I try to correct the disk geometry to the one specified by the manufacturer? Is that even possible?
Any suggestions?
-
Please Help: A Problem With Oracle-Provided 'Working' Example
A Problem With Oracle-Provided 'Working' Example Using htp.formcheckbox
I followed the simple steps in the Oracle-provided example:
Doc ID: Note:116534.1
Subject: How to use checkbox in webdb for bulk update using webdb report
However, when I select a checkbox and click on the Update button, I get an "ORA-01036: illegal variable name/number" error. Please advise. This was a very promising feature.
Fred
Below are step-by-step instructions provided by Oracle to create this "working" example:
How to use a checkbox in WEBDB 2.2 report for bulk update.
PURPOSE
This article shows how a checkbox can be placed on a WEBDB report
and how to use it.
SCOPE & APPLICATION
The following example guides you through the steps to create a working
demonstration.
In this example, the checkbox is used to select the records. On clicking
the update button, a PL/SQL procedure is called which updates col1 to
the string 'OK'.
After the update is done, the PL/SQL procedure calls the report again.
Since the report only selects records where col1 is null, the updated
records will not be displayed when the report is called again.
Step 1 - Create Table
From Sqlplus, log in as scott/tiger and execute the following:
drop table chkbox_example;
create table chkbox_example
(id varchar2(10) not null,
comments varchar2(20),
col1 varchar2(10));
Step 2 - Insert Test Data
From Sqlplus, still logged in as scott/tiger , execute the following:
declare
l_i number;
begin
for l_i in 1..50 loop
insert into chkbox_example values (l_i, 'Comments ' || l_i , NULL);
end loop;
commit;
end;
/
Step 3 -Create SQL Query based WEBDB report
Logon to a WEBDB site which has access to the database the above tables are created.
Create a SQL based Report.
Name the report :RPT_CHKBOX
The select statement for the report is :
select c.id, c.comments, c.col1,
htf.formcheckbox('p_qty',c.id) Tick
from SCOTT.chkbox_example c
where c.col1 is null
In Advanced PL/SQL, (REPORT, Before displaying the form), put the following code
htp.formOpen('scott.chkbox_process');
htp.formsubmit('p_request','Update');
htp.br;
htp.br;
Step 4 - Create a stored procedure in the database
Log on to the database as scott/tiger and execute the following to create the
procedure.
Note: Replace WEBDB to the appropriate webdb user for your installation.
In my database, I had installed webdb using WEBDB username, hence user webdb owns
the packages.
create or replace procedure chkbox_process
( p_request in varchar2 default null,
p_qty in wwv_utl_api_types.vc_arr ,
p_arg_names in wwv_utl_api_types.vc_arr ,
p_arg_values in wwv_utl_api_types.vc_arr )
is
i number;
begin
for i in 1..p_qty.count loop
if p_qty(i) is not null then
begin
update chkbox_example
set col1 = 'OK'
where chkbox_example.id = p_qty(i);
end;
end if;
end loop;
commit;
/* To Call Report again after updating */
SCOTT.RPT_CHKBOX.show
(p_request=>'Run Report',
p_arg_names=>webdb.wwv_standard_util.string_to_table2(' '),
p_arg_values=>webdb.wwv_standard_util.string_to_table2(' '));
end;
/
Summary
There are essentially 2 main modules, The WEBDB report and the pl/sql procedure (chkbox_process)
A button is created via the advanced pl/sql coding which shows on top of the report. (The
button cannot be placed at the bottom of the report due to the way WEBDB creates the procedure
internally)
When any button is clicked on the report, it calls the pl/sql procedure chkbox_process.
When the procedure is called, WEBDB always passes the parameters p_request, p_arg_names and p_arg_values.
p_qty is an additional parameter that we pass; it comes from the checkbox created
with htf.formcheckbox in the report's select statement.
The pl/sql procedure calls the report again after processing. This is done to
show how to call the report.
Restrictions:
-The Next and Prev buttons on the report will not work.
So it is important that the report can fit in 1 page only.
(This may mean leaving 'Paginate' unticked under
'Display Options' in the WEBDB report. If you do this,
then in Step 4, remove p_arg_names and p_arg_values as input parameters
to chkbox_process.)
If you're not so sure, you can use an instanceof check as insurance:
Object o = evt.getSource();
if (o instanceof Button) {
Button source = (Button) o;
I haven't thoroughly read the thread, but I use something like this:
if (evt.getSource() == someObjRef) {
// do that voodoo
}
I haven't looked into why you'd be creating a new reference...
-
Problems with Apple Photo Services
I've been having problems with Apple's Photo Services. The first time, I'm pretty sure I didn't crop some pics correctly, but they were gracious enough to refund my $$. However, then I got serious about cropping in Aperture 2, and double-checked everything before submitting them. The 20 x 30s were fine, but the 16 x 20s were all screwed up. My $ was refunded for these, but I can't figure out whether the problem is mine, or in the transmission of the pics, or something that went wrong in the Photo Service. When I complain, they always send back a small pdf of what they claim they received, and it always matches what they printed. But that's not what I sent them.
Anyone else have these problems? Might it be a bug in Aperture 2?
Sean

When I do a bulk export from Aperture 2 to anywhere (Flickr, Photoshelter, a set of files, a gallery), images are randomly affected (about 2 in thirty).
The effect is stretched, non-centered cropping. I think it affects only some cropped images, not all of them, and I am not sure about non-cropped images. ...
The extra effort of going back after the fact just to check the quality of the exported images is a waste of my time, but it has now become necessary.
The thing that bothers me the most is that selecting the images that were exported with the problem and re-exporting them produces, for me, the export I originally expected, without the errors.
-
Problem with the cache hit ratio
Hello,
I'm having a problem with the cache hit ratio I am getting. I am sure, 100% sure, that something has got to be wrong with the cache hit ratio I am fetching!
1) I will post the code that I am using to retrieve the cache hit ratio. I've seen about a thousand different equations, all equivalent in the end.
In Oracle the cache hit ratio seems to be:
cache hits / cache lookups, where
cache hits = logical I/O - physical reads
cache lookups = logical I/O
Now some people use the 'session logical reads' statistic from the view v$sysstat; others use 'db block gets' + 'consistent gets'; whatever. At the end of the day it's all the same, and this is what I use:
SELECT (P1.value + P2.value - P3.value) AS CACHE_HITS, (P1.value + P2.value) AS CACHE_LOOKUPS, P4.value AS MAX_BUFFS_SIZEB
FROM v$sysstat P1, v$sysstat P2, v$sysstat P3, V$PARAMETER P4
WHERE
P1.name = 'db block gets' AND
P2.name = 'consistent gets' AND
P3.name = 'physical reads' AND
P4.name = 'sga_max_size'
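Plugging the counters quoted later in this post back into that formula is a quick arithmetic sanity check (plain Python, no database access assumed):

```python
# Sanity-checking the posted v$sysstat counters from this thread.
cache_lookups = 119173819           # db block gets + consistent gets (logical I/O)
cache_hits    = 109680186           # logical I/O minus physical reads
physical_reads = cache_lookups - cache_hits

ratio = cache_hits / cache_lookups
print(physical_reads)               # 9493633
print(round(ratio, 4))              # 0.9203
```

Note that 9,493,633 physical block reads at an assumed 8 KB block size is about 77.8 GB, which is very close to the 77,771,964,416 bytes reported as read later in the post. So the counters are arithmetically consistent with each other; the surprise is only in how large the logical I/O count is.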
2) The problem:
The cache hit ratio I am retrieving cannot be correct. In this case I was benchmarking a HUGELY inefficient query, consisting of the union of 5 projections over the same source table, and Oracle is configured with a relatively small SGA of 300 MB. The query plan is awful: the database will read the source table 5 times.
And I can see in the physical data statistics of the source tablespace that total bytes read is approximately 5 times the size of the text file that I used to bulk load data into the database.
Some of the relevant stats, wait events:
db file scattered read: 1129.93 seconds
Elapsed time: 1311.9 seconds
CPU time: 179.84 seconds
SGA max size: 314572800 bytes
Total bytes read: 77771964416 bytes (approximately 72 GB)
The source txt file loaded into the database was approximately 16 GB;
the number of reads was about 4.5 times the size of the source data file.
I would say this: given the difference between CPU time and elapsed time, it is clear that the query spent almost all of its time doing db file scattered reads. How is it possible that I get the following cache hit ratio:
Cache hit ratio: 0.92
Cache hits: 109680186
Cache lookups: 119173819
I mean, only 8% of that logical I/O corresponded to physical I/O? It is just not possible.
3) Procedure of taking stats:
Now to retrieve these stats I snapshot the system 2 times. One before the query, one after the query.
But: this is not done in a single session. In total 3 sessions are created: one session to retrieve the stats before the query, one session to run the query, and a last session to snapshot after the query.
Could the problem, assuming there is one, be related to this:
"The V$SESSTAT view contains statistics on a per-session basis and is only valid for the session currently connected. When a session disconnects all statistics for the session are updated in V$SYSSTAT. The values for the statistics are cleared until the next session uses them."
What does this paragraph mean? Does it mean that v$sysstat only shows you the stats of the last session that closed? Or does it mean that v$sysstat is incremented with the statistics of each session's v$sesstat once that session terminates? If so, then my procedure for gathering those stats should be correct.
Can anyone help me sort out the origin of such a high cache hit ratio, with so much I/O being done?
sono99 wrote:
Hi,
First off, let me start by saying that there were many things in your post that I could not understand. 1. Because I am not an Oracle expert; I use whatever RDBMS whenever I need to. 2. Because another problem has come up and, right now, I cannot inform myself enough to comprehend it all.
Well, could it be that you need to understand the database you are working on in order to comprehend it? That is why we strongly advise you to read the Concepts manual first: you need to understand the architecture that Oracle uses, as well as the basic concepts of how Oracle does locking and maintains read consistency. It does these differently from other database engines, and some things become nonsense if looked at from the viewpoint of a single user.
>
quote:
It would be useful to see the execution plan, just in case you have simplified the problem so much that a critical detail is missing.
First, the query code:
The statement below was captured from the log (each line was prefixed with "[2009-10-20 15:11:59,141 INFO]: OUTPUT_GOBLER:>"):
CREATE TABLE FAVFRIEND
NOLOGGING TABLESPACE TARGET
AS
SELECT ID AS USRID, FAVF1 AS FAVF FROM PROFILE
UNION ALL
SELECT ID AS USRID, FAVF2 AS FAVF FROM PROFILE
UNION ALL
SELECT ID AS USRID, FAVF3 AS FAVF FROM PROFILE
UNION ALL
SELECT ID AS USRID, FAVF4 AS FAVF FROM PROFILE
UNION ALL
SELECT ID AS USRID, FAVF5 AS FAVF FROM PROFILE;
Now, although it is clear from the query that the statement is executed with NOLOGGING, I have disabled logging entirely for the tablespace.
There are certain rules about nologging that may not be obvious. Again, this derives from the basic Oracle architecture, and if you use the wrong definitions of things like logging, you will be led down the primrose path to confusion.
>
Furthermore, yes, the RDBMS is a test RDBMS... I have dropped the database a few times... And I am constantly deleting and re-inserting data into the source database table named PROFILE.
I also make sure to check all the datafile statistics, and for this query the amount of redo log, undo, and temp space used is negligible, practically zero.
Create table is DDL, which has implied commits before and afterwards. There is a lot going on, some of it dependent on the volume of data returned. The Oracle database writer writes things out when it feels like it; there are situations where it might just leave it in memory for a while. With nologging, Oracle may not care that you can't perform recovery if it is interrupted. So you might want to look into Statspack or EM to tell you what is going on; the datafile statistics may not be all that informative for this case.
>
Most of the I/O is reading; a little of it is writing.
My idea is not to optimize this query; it is to understand how it performs.
Well, have you read the Concepts manual?
I have other implementations to test, namely I having trouble with one of them.
Furthermore, I doubt the query plan Oracle is using actually involves table scans (as I'd like it to do), because in the wait events, most of the wait time for this query is spent doing "db file scattered read". And I think this is different from a table scan.
Please look up the definition of [db file scattered read|http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/instance_tune.htm#sthref703].
>
Do you really have to use sessions external to the query session ? Can you query v$mystat joined to v$statname from the session itself.
No, I don't want to do that!
I avoid as much as possible having the code I execute be implemented in Java.
Why do you think Java has anything to do with this? In your session, desc v$mystat and v$statname; these are views you can look at.
When I can avoid it, I don't query the database directly through JDBC; I use the RDBMS command-line client, which is supposed to be very robust.
Er, is that sqlplus?
So yes, I only connect to the database with JDBC... in the very last session.
Of course, I could have put both the gather-stats-before-query and gather-stats-after-query steps into a single script: the script that would also be running the query.
But that would cause me a number of problems, namely that some of the SQL I build has to be generated dynamically, and I don't want to be replicating the snapshotting code into every query script I make. This way I have one SQL script with the snapshotting code, and multiple scripts for running each query. I avoid code replication in this manner.
Instrumentation is a large subject; dynamic SQL generation is something to be avoided if possible. Remember, Oracle is written with the idea that many people are going to be sharing code and the database, so it is optimized in that way. For SQL parsing in particular, if every SQL is different, you get a performance problem called "hard parsing." You can (and generally should, and sometimes can't avoid) use bind variables so that Oracle doesn't need to hard parse every SQL. In fact, this is one of those things that applies to other engines besides Oracle. I would recommend you read Tom Kyte's books; he explains what is going on in detail, including in some places the non-Oracle viewpoint.
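On the bind-variable point, here is a minimal sketch of the idea using Python's stdlib sqlite3 as a stand-in for Oracle (an assumption of convenience, not the poster's stack): the SQL text with `?` placeholders stays identical across executions, so the engine can reuse one prepared statement instead of hard-parsing a fresh literal-laden statement per row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE profile (id INTEGER, favf TEXT)")

# One parameterized statement, many executions: the SQL text never
# changes, so the engine parses it once and binds new values each time.
stmt = "INSERT INTO profile (id, favf) VALUES (?, ?)"
conn.executemany(stmt, [(i, f"friend{i}") for i in range(1000)])

count, = conn.execute("SELECT COUNT(*) FROM profile").fetchone()
print(count)  # 1000
```

The same pattern applies in Oracle via bind variables in PL/SQL or placeholders in JDBC prepared statements.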
>
Furthermore, since the database is not a production database, it is there so I can do my tests. I don't have to be concerned with what other sessions may be doing to my system. There are only the sessions I control.
No, there are sessions Oracle controls. If you are on Unix, you can easily see this, but there are ways to see it on Windows, too. In some cases, your own sessions can affect themselves.
>
Then what is the array fetch size? If the array fetch size is large enough, the number of block visits would be similar to the number of physical block reads.
I don't know what the arraysize you mention is. I have not touched that parameter, so whatever it is, it's the default.
You should find out! You can go to http://tahiti.oracle.com and type "array fetch size" into the search box. You can also go to http://asktom.oracle.com and do the same thing, with some more interesting detail.
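For readers unfamiliar with the term: the array fetch size is how many rows the client pulls per fetch round trip (SET ARRAYSIZE in SQL*Plus, setFetchSize in JDBC). A sketch of the batching idea using the DB-API `arraysize` in Python's stdlib sqlite3, which again merely stands in for Oracle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

cur = conn.cursor()
cur.arraysize = 100            # rows fetched per round trip (DB-API hint)
cur.execute("SELECT n FROM t")

round_trips = 0
rows = []
while True:
    batch = cur.fetchmany()    # honors cur.arraysize
    if not batch:
        break
    round_trips += 1
    rows.extend(batch)

print(len(rows), round_trips)  # prints: 1000 10
```

With a larger arraysize the same rows arrive in fewer fetches, which is why it can change the logical I/O count the poster is measuring.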
>
By the way, I don't get the query results into my client, the query results are dumped into a target output table.
So, if the arraysize has something to do with the number of rows that Oracle returns to the client in each step... I think it doesn't matter.
You may hear this phrase a lot:
"It depends."
>
As for the query plan, if I am not mistaken, you can't get query plans for queries that are CREATE TABLE AS SELECT.
What?
JG@TTST> explain plan for create table jjj as select * from product_master;
Explained.
JG@TTST> select count(*) from plan_table;
  COUNT(*)
----------
         3
I can, however, omit the CREATE TABLE part and just ask for the evaluation of the SELECT part of the query; I believe it should be the same.
"Optimizer" "Cost" "Cardinality" "Bytes" "Partition Start" "Partition Stop" "Partition Id" "ACCESS PREDICATES" "FILTER PREDICATES"
"SELECT STATEMENT" "ALL_ROWS" "2563" "586110" "15238860" "" "" "" "" ""
"UNION-ALL" "" "" "" "" "" "" "" "" ""
"TABLE ACCESS(FULL) SONO99.PROFILE" "" "512" "117222" "3047772" "" "" "" "" ""
"TABLE ACCESS(FULL) SONO99.PROFILE" "" "513" "117222" "3047772" "" "" "" "" ""
"TABLE ACCESS(FULL) SONO99.PROFILE" "" "513" "117222" "3047772" "" "" "" "" ""
"TABLE ACCESS(FULL) SONO99.PROFILE" "" "513" "117222" "3047772" "" "" "" "" ""
"TABLE ACCESS(FULL) SONO99.PROFILE" "" "513" "117222" "3047772" "" "" "" "" ""
This query plan was taken from sql developer, exported to txt, and the PROFILE table here has only 100k tuples.
Right now I am more concerned with testing the MODEL query, which Oracle doesn't seem to be able to run any more... but that is a matter for another thread.
Regarding this plan: the UNION ALL seems to be more than just a binary operator; it seems to be n-ary.
The UNION ALL in that execution plan takes five SONO99.PROFILE tables as its leaves and does a table scan on each of them. So I'd say the RDBMS should only scan each database block once, not 5 times.
But it doesn't seem to be so. It looks like Oracle scans each table completely and then moves on to the next SELECT statement in the UNION ALL, because the amount of the source table that was read is 5 times the size of the source table; Oracle didn't reuse the blocks it had read.
But this is just my feeling.
Your feeling is uninteresting. Telling us what you really hope to accomplish might be more interesting.
Anyway, in terms of consistent gets, how many consistent gets should the RDBMS be doing? One for each table block?
It depends.
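The "each block read 5 times" observation is, in fact, what any cache smaller than the table would predict. A toy LRU model (a deliberate oversimplification; a real buffer cache treats full scans specially and may bypass the cache entirely) shows why sequential passes over a table larger than the cache get no block reuse:

```python
from collections import OrderedDict

def scan_with_lru(table_blocks, cache_blocks, passes):
    """Count 'physical reads' for repeated full scans through a tiny
    LRU buffer-cache model (grossly simplified vs. a real database)."""
    cache = OrderedDict()
    physical = 0
    for _ in range(passes):
        for block in range(table_blocks):
            if block in cache:
                cache.move_to_end(block)        # cache hit
            else:
                physical += 1                   # miss: read from disk
                cache[block] = True
                if len(cache) > cache_blocks:
                    cache.popitem(last=False)   # evict least recently used
    return physical

# Table of 1000 blocks, cache of 300 blocks, scanned 5 times: each pass
# evicts exactly the blocks the next pass will need, so nothing is reused.
print(scan_with_lru(1000, 300, 5))  # 5000
```

With a cache at least as large as the table, the same five passes would cost only one physical read per block, which is the behavior the poster was expecting.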
>
My best regards,
Nuno (99sono xp). -
Problem with String to Int conversion
Dear Friends,
Problem with String to Int conversion
I have a column where most of the values are numeric; only 4 values are non-numeric.
I have replaced those non-numeric values with numeric ones in order to maintain the data type.
CASE Grade.Grade WHEN 'E4' THEN '24' WHEN 'E3' THEN '23' WHEN 'E2' THEN '22' WHEN 'E1' THEN '21' ELSE Grade.Grade END
This gives the result shown below:
Grade:
0, 1, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 2, 20, 21, 22, 23, 24, 3, 4, 5, 6, 7, 8, 9
(Note the values sort as strings, not as numbers.)
Now I want to convert this value to numeric and do some calculation
So I changed the formula as below
cast (CASE Grade.Grade WHEN 'E4' THEN '24' WHEN 'E3' THEN '23' WHEN 'E2' THEN '22' WHEN 'E1' THEN '21' ELSE Grade.Grade END as INT)
Now I get the following error
View Display Error
Odbc driver returned an error (SQLExecDirectW).
Error Details
Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 17001] Oracle Error code: 1722, message: ORA-01722: invalid number at OCI call OCIStmtFetch. [nQSError: 17012] Bulk fetch failed. (HY000)
SQL Issued: SELECT cast ( CASE Grade.Grade WHEN 'E4' THEN '24' WHEN 'E3' THEN '23' WHEN 'E2' THEN '22' WHEN 'E1' THEN '21' ELSE Grade.Grade END as Int) saw_0 FROM "Human Capital - Manpower Costing" WHERE LENGTH(CASE Grade.Grade WHEN 'E1' THEN '20' WHEN 'E2' THEN '21' WHEN 'E3' THEN '22' WHEN 'E4' THEN '23' ELSE Grade.Grade END) > 0 ORDER BY saw_0
Could anybody help me
Regards
Mustafa
Edited by: Musnet on Jun 29, 2010 5:42 AM
Edited by: Musnet on Jun 29, 2010 6:48 AM
Dear Kart,
This gives me another hint. Yes, you are right: there was one row which returned neither a blank nor any value.
I have changed the code as follows and it works fine.
Thanks again for your support
Regards
Code: cast (CASE (CASE WHEN Length(Grade.Grade)=0 THEN '--' ELSE Grade.Grade END) WHEN 'E4' THEN '24' WHEN 'E3' THEN '23' WHEN 'E2' THEN '22' WHEN 'E1' THEN '21' when '--' then '-1' ELSE Grade.Grade END as Int) -
Replication problems with slow ( 20kB/s line)
Yesterday we noticed on our production database that replication no longer ran because of an error in one record. This forced us to re-initialize the subscription. Maybe this is exactly what we should not have done, but we did. The result was that production stopped, because the first part of the replication dumps all the tables and creates new empty ones. The problem is that this does not seem a good idea over a slow connection between two machines. Better would be if we could just restart the replication from a backup, but as far as I know this is not possible.
So my question now is: what can we do to speed up the replication if there is no way we can improve the line quality? (One server is in NL and the other in CN, and as everybody knows there is the Great Firewall in Beijing which controls all the internet traffic.)
Is there a way to prevent replication from emptying the tables and filling them again?
Many thanks, Peter Kaldenberg
I know that probably there is a problem with our firewalls, but the fact remains that we are facing slow connections, so I wonder what we can do at the SQL level to speed up replication, or better, avoid doing a bulk copy over the net at all. In fact, why does SQL not use some kind of delta protocol? When only data is involved this should work. Of course, when the layout of tables has been changed it makes sense that the tables have to be rebuilt, but not for data changes.
Maybe a solution could be to use checksum comparison: update only those records which are different and, of course, add the missing ones.
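That checksum idea can be sketched in a few lines. A hedged illustration with hypothetical helper names (a real implementation would compute digests on each side and ship only the digests, not the rows, over the slow link):

```python
import hashlib

def row_digest(row):
    """Checksum of one row's column values (order-sensitive)."""
    joined = "\x1f".join(str(v) for v in row)
    return hashlib.sha256(joined.encode()).hexdigest()

def delta(source, target):
    """Compare two {pk: row} snapshots by checksum and return only the
    rows the subscriber needs to insert/update, plus the keys to delete."""
    changes = {}
    for pk, row in source.items():
        if pk not in target or row_digest(target[pk]) != row_digest(row):
            changes[pk] = row          # new or modified row
    deletes = [pk for pk in target if pk not in source]
    return changes, deletes

src = {1: ("a", 10), 2: ("b", 20), 3: ("c", 30)}
tgt = {1: ("a", 10), 2: ("b", 99), 4: ("d", 40)}
changes, deletes = delta(src, tgt)
print(sorted(changes), deletes)  # [2, 3] [4]
```

Only the changed and missing rows cross the wire, which is exactly the saving the poster is after compared with re-initializing the whole subscription.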
Additionally, why does the replication not report how far along the bulk copy is? At least then the user would know it is still working. Right now, whatever setting I change, after 10 minutes the replication gives no useful information, which leaves the user thinking that the replication no longer works.
What about 3rd party replication tools are they any better?
Peter