Jpub generated collections and SLOW RETRIEVAL
Oracle 8.1.6 EE
SQLJ/JDBC 8.1.7
I use JPub to generate Oracle-specific user types (i.e., -usertypes=oracle). All the classes generate fine and I can use them in my code. The problem is that when I try to use the getArray() or getElement() methods from the generated collection class, it is REALLY SLOW: on the order of two minutes to retrieve 30 records. Using a test harness in SQL*Plus the retrieval is fast, on the order of milliseconds.
I call a stored procedure that returns the array of objects.
The object looks like this ...
CREATE OR REPLACE TYPE account_item AS OBJECT
(
  id              number,
  name            varchar2(200),
  tag_id          varchar2(50),
  state           varchar2(20),
  zip             varchar2(20),
  primary_contact varchar2(200),
  phone           varchar2(20),
  status          varchar2(50),
  broker          varchar2(200)
);
/
The collection type looks like ...
CREATE OR REPLACE TYPE account_item_list AS TABLE OF account_item;
/
Does anyone from the jdbc/sql group have any idea why this would be happening?
Thanks.
Joe
Ad (1): No idea. Retrieving 9 records each with a nested table of 30 items is practically instantaneous. (Using a 9.0.1 client and server and OCI.) Are you using thin or OCI JDBC? Maybe there is an issue connecting between an 8.1.7 client and an 8.1.6 server? (The 8.1.6 JPub runtime had bad performance. 8.1.7 is much improved and should be about equal with 9.0.1.)
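As a side note on the thin-vs-OCI question: which driver is in use can be read straight off the JDBC connection URL. A minimal sketch of the two URL forms (the host, port and TNS alias below are placeholders, not values from the original post):

```java
public class DriverUrls {
    // OCI driver: goes through the installed Oracle client libraries;
    // "mydb" stands for a TNS alias from tnsnames.ora.
    static final String OCI_URL = "jdbc:oracle:oci8:@mydb";

    // Thin driver: pure Java, connects directly over the network;
    // host, port and SID here are placeholders.
    static final String THIN_URL = "jdbc:oracle:thin:@myhost:1521:mydb";

    public static void main(String[] args) {
        System.out.println(OCI_URL);
        System.out.println(THIN_URL);
    }
}
```

Since the two drivers have quite different code paths, trying the same retrieval through the other URL form is a quick way to tell whether the slowdown is driver-specific.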
Ad (2): With the SQL definitions of account_item and account_item_list and the following table in the scott/tiger schema:
create table accounts (id number, account account_item_list)
nested table account store as accounts_nested_table;
you can run JPublisher as follows:
jpub -user=scott/tiger -sql=account_item:AccountItem,account_item_list:AccountItemList
Then use the following program TestAccount.sqlj (can't resist SQLJ here):
import java.sql.SQLException;
import oracle.sqlj.runtime.Oracle;
import sqlj.runtime.ResultSetIterator;
public class TestAccount
{ #sql public static iterator Iter (int id, AccountItemList account);
  public static void main(String[] args) throws SQLException
  { Oracle.connect("jdbc:oracle:oci:@","scott","tiger");
    Iter it;
    #sql it = { select id, account from accounts };
    while (it.next())
      printList(it.id(), it.account().getArray());
    it.close(); Oracle.close();
  }
  private static void printList(int id, AccountItem[] items) throws SQLException
  { System.out.print("List "+id+" [");
    for (int i=0; i<items.length; i++)
    { System.out.print(items[i].getId());
      if (i < items.length-1) System.out.print(",");
    }
    System.out.println("]");
  }
}
Compile everything with:
sqlj *.java *.sqlj
And then run:
java TestAccount
Similar Messages
-
How can I generate and/or retrieve log files from iPad
How can I generate and/or retrieve log files from iPad?
NB!
There are NO files appearing in ~/Library/Logs/CrashReporter/MobileDevice/<name of iPad>, so where else can I find it?
I want to force it to produce a log, or find it within the iPad.
It is needed for support of an app.
Not sure on porting out the log data, but you can find it under General -> About -> Diagnostic & Usage -> Diagnostic & Usage Data. It will give you a list of your log data, and you can get additional details by selecting the applicable log you are looking for. Hope this helps.
-
TFS 2010: Query over all Collections and Team Projects
Hi,
is it possible to create a query which queries all collections and team projects on a TFS?
For example, I want to see all work items assigned to me. Currently I can only query within a single collection, not across the complete TFS.
How can I do this?
Thanks,
Mat
Hi Mat,
Thank you for your post.
You may need to write code against the TFS API to list all team project collections on the server. For detailed information, you can refer to Taylaf's blog
Retrieve the List of Team Project Collections from TFS 2010 Client APIs and Paul's blog
List all TFS Project Collections and Projects.
I hope this information will help resolve this issue.
If anything is unclear, please feel free to let me know.
Regards,
Lily Wu
MSDN Community Support | Feedback to us
Develop and promote your apps in Windows Store
Please remember to mark the replies as answers if they help and unmark them if they provide no help.
-
How to add a new user property and then retrieve it from a portlet
Trying to add a user property and then retrieve it from a remote web service?
Add a user property and map it:
1. Create a property.
2. Go to Global Object Property Map.
3. Go to users, edit and select the new property.
4. Go to User Profile Manager.
5. For portlets, go to the "Position Information" section and add it. (For the purpose of this test, add it to the profile section as well.)
6. Under the "User Profile Manager" go to the "User Information - Property Map" step in the wizard.
7. Go to the "User Information Attribute" and add the property.
8. Click on the pencil to the right of it and give it a name. (The name is what's going to appear in the list of user information under the portlet web service.)
9. Click finish.
10. Now create/edit the web service for the portlet from which you want to display user properties.
11. Under the "User Information", click "add existing user info" and select the property you want.
12. From the portal toolbar, edit the user profile under "My Account" and then "Edit User Profile" and give the new property a value.
13. Test code below.
In C#:
IPortletContext context = PortletContextFactory.CreatePortletContext(Request, Response);
IPortletRequest portletRequest = context.GetRequest();
System.Collections.IDictionary UserInfoVariables = portletRequest.GetSettingCollection(SettingType.UserInfo);
System.Collections.IDictionaryEnumerator UserInfo = UserInfoVariables.GetEnumerator();
while (UserInfo.MoveNext())
{
    // To display in a listbox:
    ListBox1.ClearSelection();
    ListBox1.Items.Add(UserInfo.Key.ToString() + ": " + UserInfo.Value);
}
In ASP:
<%
Dim objSettings, dUserInfo, sEmpID
Set objSettings = Server.CreateObject("GSServices.Settings")
' Get the user info settings; get the employee ID from the user info
Set dUserInfo = objSettings.GetUserInfoSettings
For Each item In dUserInfo
    Response.Write "<BR>" & item & ": " & dUserInfo(item)
Next
%>
In Java:
IPortletContext portletContext = PortletContextFactory.createPortletContext(req, res);
IPortletRequest portletReq = portletContext.getRequest();
String value = portletReq.getSettingValue(SettingType.Portlet, settingName);
-
Using bulk collect and for all to solve a problem
Hi All
I have a following problem.
Please forgive me if it's a stupid question :-) I'm learning.
1: Data in a staging table xx_staging_table
2: two Target table t1, t2 where some columns from xx_staging_table are inserted into
Some of the columns from the staging table data are checked for valid entries and then some columns from that row will be loaded into the two target tables.
The two target tables use different set of columns from the staging table
When I had a thousand records there was no problem with a direct insert, but it seems we will now have half a million records.
This has slowed down the process considerably.
My question is
Can I use the BULK COLLECT and FORALL functionality to get specific columns from the staging table, validate each row using those columns,
and then use a bulk insert to load the data into a specific table?
So code would be like
The get_staging_data cursor will have all the columns I need from the staging table:
cursor get_staging_data is
  select * from xx_staging_table;  -- about 500,000 records
Use bulk collect to load about 10000 or so records into a PL/SQL table,
and then do a bulk insert like this:
CREATE TABLE t1 AS SELECT * FROM all_objects WHERE 1 = 2;
CREATE OR REPLACE PROCEDURE test_proc (p_array_size IN PLS_INTEGER DEFAULT 100)
IS
TYPE ARRAY IS TABLE OF all_objects%ROWTYPE;
l_data ARRAY;
CURSOR c IS SELECT * FROM all_objects;
BEGIN
OPEN c;
LOOP
FETCH c BULK COLLECT INTO l_data LIMIT p_array_size;
FORALL i IN 1..l_data.COUNT
INSERT INTO t1 VALUES l_data(i);
EXIT WHEN c%NOTFOUND;
END LOOP;
CLOSE c;
END test_proc;
In the above example, t1 and the cursor have the same number of columns.
In my case the columns in the cursor are a small subset of the columns of table t1,
so can I use a FORALL to load that subset into table t1? How does that work?
Thanks
J
user7348303 wrote:
checking if the value is valid and theres also some conditional processing rules ( such as if the value is a certain value no inserts are needed)
which are a little more complex than I can put in a simple
Well, if the processing is too complex (and conditional) to be done in SQL, then doing it in PL/SQL is justified... but it will be slower, as you are now introducing an additional layer. Data now needs to travel between the SQL layer and the PL/SQL layer. This is slower.
PL/SQL is inherently serialised - and this also affects performance and scalability. PL/SQL cannot be parallelised by Oracle in an automated fashion. SQL processes can.
To put it in simple terms: you create PL/SQL procedure Foo that processes a SQL cursor, and you execute that proc. Oracle cannot run multiple parallel copies of Foo. It can perhaps parallelise the SQL cursor that Foo uses - but not Foo itself.
However, if Foo is called by the SQL engine it can run in parallel - as the SQL process calling Foo is running in parallel. So if you make Foo a pipelined table function (written in PL/SQL), and you design and code it as a thread-safe/parallel-enabled function, it can be called and executed in parallel by the SQL engine.
So moving your PL/SQL code into a parallel-enabled pipelined function written in PL/SQL, and using that function via parallel SQL, can increase performance over running the same basic PL/SQL processing as a serialised process.
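As a language-neutral aside, the batching idea the original poster asked about (process a large input in fixed-size chunks, one bulk operation per chunk) can be sketched in plain Java; this only illustrates the pattern, not the PL/SQL mechanism itself, and the helper name is made up:

```java
import java.util.ArrayList;
import java.util.List;

public class Chunker {
    // Split a large list into fixed-size batches, mirroring the
    // FETCH ... BULK COLLECT ... LIMIT n loop: each sub-list would be
    // handed to one bulk insert.
    static <T> List<List<T>> chunks(List<T> rows, int batchSize) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize) {
            out.add(rows.subList(i, Math.min(i + batchSize, rows.size())));
        }
        return out;
    }
}
```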
This is of course assuming that the processing that needs to be done in PL/SQL can be designed and coded for parallel processing in this fashion.
-
Dead stock and slow moving material
Hello,
I need to find out how to get dead stock and slow-moving material manually from tables (instead of using MC46 and MC50).
Could you please tell me the logic and tables needed to retrieve this data.
regards
Gaurav Maini
Use tables MSEG and MVER.
The logic is a calculation of the frequency of material movements.
Regards
-
BULK COLLECT and FORALL with dynamic INSERT.
Hello,
I want to apply the BULK COLLECT and FORALL features to an insert statement in my procedure for performance improvement, as it has to insert a huge amount of data.
But the problem is that the insert statement is generated dynamically, and even the table name is only known at run-time, so I am not able to apply the performance-tuning concepts.
See below the code
PROCEDURE STP_MES_INSERT_GLOBAL_TO_MAIN
(P_IN_SRC_TABLE_NAME VARCHAR2 ,
P_IN_TRG_TABLE_NAME VARCHAR2 ,
P_IN_ED_TRIG_ALARM_ID NUMBER ,
P_IN_ED_CATG_ID NUMBER ,
P_IN_IS_PIECEID_ALARM IN CHAR,
P_IN_IS_LAST_RECORD IN CHAR
)
IS
V_START_DATA_ID NUMBER;
V_STOP_DATA_ID NUMBER;
V_FROM_DATA_ID NUMBER;
V_TO_DATA_ID NUMBER;
V_MAX_REC_IN_LOOP NUMBER := 30000;
V_QRY1 VARCHAR2(32767);
BEGIN
EXECUTE IMMEDIATE 'SELECT MIN(ED_DATA_ID), MAX(ED_DATA_ID) FROM '|| P_IN_SRC_TABLE_NAME INTO V_START_DATA_ID , V_STOP_DATA_ID;
--DBMS_OUTPUT.PUT_LINE('ORIGINAL START ID := '||V_START_DATA_ID ||' ORIGINAL STOP ID := ' || V_STOP_DATA_ID);
V_FROM_DATA_ID := V_START_DATA_ID ;
IF (V_STOP_DATA_ID - V_START_DATA_ID ) > V_MAX_REC_IN_LOOP THEN
V_TO_DATA_ID := V_START_DATA_ID + V_MAX_REC_IN_LOOP;
ELSE
V_TO_DATA_ID := V_STOP_DATA_ID;
END IF;
LOOP
BEGIN
LOOP
V_QRY1 := ' INSERT INTO '||P_IN_TRG_TABLE_NAME||
' SELECT * FROM '||P_IN_SRC_TABLE_NAME ||
' WHERE ED_DATA_ID BETWEEN ' || V_FROM_DATA_ID ||' AND ' || V_TO_DATA_ID;
EXECUTE IMMEDIATE V_QRY1;
commit;
V_FROM_DATA_ID := V_TO_DATA_ID + 1;
IF ( V_STOP_DATA_ID - V_TO_DATA_ID > V_MAX_REC_IN_LOOP ) THEN
V_TO_DATA_ID := V_TO_DATA_ID + V_MAX_REC_IN_LOOP;
ELSE
V_TO_DATA_ID := V_TO_DATA_ID + (V_STOP_DATA_ID - V_TO_DATA_ID);
END IF;
EXCEPTION
WHEN OTHERS THEN ...
... and so on.
Now you can observe here that P_IN_SRC_TABLE_NAME is the source table name which we get as a parameter at run-time. I have used 2 tables in the insert statement: P_IN_TRG_TABLE_NAME (into which I have to insert data) and P_IN_SRC_TABLE_NAME (from which I have to insert data).
V_QRY1 := ' INSERT INTO '||P_IN_TRG_TABLE_NAME||
' SELECT * FROM '||P_IN_SRC_TABLE_NAME ||
' WHERE ED_DATA_ID BETWEEN ' || V_FROM_DATA_ID ||' AND ' || V_TO_DATA_ID;
EXECUTE IMMEDIATE V_QRY1;
Now when I apply the bulk collect and forall feature I am facing the out-of-scope problem... see the code below:
BEGIN
EXECUTE IMMEDIATE 'SELECT MIN(ED_DATA_ID), MAX(ED_DATA_ID) FROM '|| P_IN_SRC_TABLE_NAME INTO V_START_DATA_ID , V_STOP_DATA_ID;
--DBMS_OUTPUT.PUT_LINE('ORIGINAL START ID := '||V_START_DATA_ID ||' ORIGINAL STOP ID := ' || V_STOP_DATA_ID);
V_FROM_DATA_ID := V_START_DATA_ID ;
IF (V_STOP_DATA_ID - V_START_DATA_ID ) > V_MAX_REC_IN_LOOP THEN
V_TO_DATA_ID := V_START_DATA_ID + V_MAX_REC_IN_LOOP;
ELSE
V_TO_DATA_ID := V_STOP_DATA_ID;
END IF;
LOOP
DECLARE
TYPE TRG_TABLE_TYPE IS TABLE OF P_IN_SRC_TABLE_NAME%ROWTYPE;
V_TRG_TABLE_TYPE TRG_TABLE_TYPE;
CURSOR TRG_TAB_CUR IS
SELECT * FROM P_IN_SRC_TABLE_NAME
WHERE ED_DATA_ID BETWEEN V_FROM_DATA_ID AND V_TO_DATA_ID;
V_QRY1 varchar2(32767);
BEGIN
OPEN TRG_TAB_CUR;
LOOP
FETCH TRG_TAB_CUR BULK COLLECT INTO V_TRG_TABLE_TYPE LIMIT 30000;
FORALL I IN 1..V_TRG_TABLE_TYPE.COUNT
V_QRY1 := ' INSERT INTO '||P_IN_TRG_TABLE_NAME||' VALUES V_TRG_TABLE_TYPE(I);'
EXECUTE IMMEDIATE V_QRY1;
EXIT WHEN TRG_TAB_CUR%NOTFOUND;
END LOOP;
CLOSE TRG_TAB_CUR;
V_FROM_DATA_ID := V_TO_DATA_ID + 1;
IF ( V_STOP_DATA_ID - V_TO_DATA_ID > V_MAX_REC_IN_LOOP ) THEN
V_TO_DATA_ID := V_TO_DATA_ID + V_MAX_REC_IN_LOOP;
ELSE
V_TO_DATA_ID := V_TO_DATA_ID + (V_STOP_DATA_ID - V_TO_DATA_ID);
END IF;
EXCEPTION
WHEN OTHERS THEN ... (and so on)
But the above code is not helping me. What am I doing wrong? How can I tune this dynamically generated statement to use bulk collect for better performance?
Thanks in advance!
Hello,
A table name cannot be bound as a parameter in SQL; this won't compile:
EXECUTE IMMEDIATE ' INSERT INTO :1 VALUES ......
USING P_IN_TRG_TABLE_NAME ...
but this should work:
EXECUTE IMMEDIATE ' INSERT INTO ' || P_IN_TRG_TABLE_NAME || ' VALUES ......
You cannot declare a type that is based on a table whose name is in a variable.
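The same restriction exists at the JDBC level, incidentally: bind variables can only stand for values, never for identifiers such as table names, so a table name has to be validated and concatenated into the statement text. A sketch (the helper name and the validation rule are my own, purely to illustrate the point):

```java
public class SqlBuilder {
    // Bind variables cannot replace identifiers, so table names must go
    // into the SQL text itself. Whitelist-style validation guards against
    // SQL injection through the concatenated names.
    static String insertSelect(String trgTable, String srcTable) {
        for (String t : new String[] { trgTable, srcTable }) {
            if (!t.matches("[A-Za-z][A-Za-z0-9_$#]*")) {
                throw new IllegalArgumentException("bad identifier: " + t);
            }
        }
        // The BETWEEN bounds stay as real bind variables.
        return "INSERT INTO " + trgTable + " SELECT * FROM " + srcTable
             + " WHERE ed_data_id BETWEEN ? AND ?";
    }
}
```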
PL/SQL is a strongly typed language; a type must be known at compile time. Code like this is not allowed:
PROCEDURE xx( src_table_name varchar2 )
DECLARE
TYPE tab IS TABLE OF src_table_name%ROWTYPE;
...
This can be done by creating one big dynamic SQL block - see the example below (tested on Oracle 10 XE; this is a slightly simplified version of your procedure):
CREATE OR REPLACE
PROCEDURE stp1(
p_in_src_table_name VARCHAR2 ,
p_in_trg_table_name VARCHAR2 ,
v_from_data_id NUMBER := 100,
v_to_data_id NUMBER := 100000
)
IS
BEGIN
EXECUTE IMMEDIATE q'{
DECLARE
TYPE trg_table_type IS TABLE OF }' || p_in_src_table_name || q'{%ROWTYPE;
V_TRG_TABLE_TYPE TRG_TABLE_TYPE;
CURSOR TRG_TAB_CUR IS
SELECT * FROM }' || p_in_src_table_name ||
q'{ WHERE ED_DATA_ID BETWEEN :V_FROM_DATA_ID AND :V_TO_DATA_ID;
BEGIN
OPEN TRG_TAB_CUR;
LOOP
FETCH TRG_TAB_CUR BULK COLLECT INTO V_TRG_TABLE_TYPE LIMIT 30000;
FORALL I IN 1 .. V_TRG_TABLE_TYPE.COUNT
INSERT INTO }' || p_in_trg_table_name || q'{ VALUES V_TRG_TABLE_TYPE( I );
EXIT WHEN TRG_TAB_CUR%NOTFOUND;
END LOOP;
CLOSE TRG_TAB_CUR;
END; }'
USING v_from_data_id, v_to_data_id;
COMMIT;
END;
But this probably won't give any performance improvements. Bulk collect and forall can give performance improvements when there is a DML operation inside a loop,
and this one single DML operates on only one record or a relatively small number of records, and this DML is repeated many, many times in the loop.
I guess that your code is the opposite of this - it contains an insert statement that operates on many records (one single insert ~ 30000 records),
and you are trying to replace it with bulk collect/forall - INSERT INTO ... SELECT FROM will almost always be faster than bulk collect/forall.
Look at a simple test - below is a procedure that uses INSERT ... SELECT:
CREATE OR REPLACE
PROCEDURE stp(
p_in_src_table_name VARCHAR2 ,
p_in_trg_table_name VARCHAR2 ,
v_from_data_id NUMBER := 100,
v_to_data_id NUMBER := 100000
)
IS
V_QRY1 VARCHAR2(32767);
BEGIN
V_QRY1 := ' INSERT INTO '|| P_IN_TRG_TABLE_NAME ||
' SELECT * FROM '|| P_IN_SRC_TABLE_NAME ||
' WHERE ed_data_id BETWEEN :f AND :t ';
EXECUTE IMMEDIATE V_QRY1
USING V_FROM_DATA_ID, V_TO_DATA_ID;
COMMIT;
END;
/
And we can compare both procedures:
SQL> CREATE TABLE test333
2 AS SELECT level ed_data_id ,
3 'XXX ' || LEVEL x,
4 'YYY ' || 2 * LEVEL y
5 FROM dual
6 CONNECT BY LEVEL <= 1000000;
Table created.
SQL> CREATE TABLE test333_dst AS
2 SELECT * FROM test333 WHERE 1 = 0;
Table created.
SQL> set timing on
SQL> ed
Wrote file afiedt.buf
1 BEGIN
2 FOR i IN 1 .. 100 LOOP
3 stp1( 'test333', 'test333_dst', 1000, 31000 );
4 END LOOP;
5* END;
SQL> /
PL/SQL procedure successfully completed.
Elapsed: 00:00:22.12
SQL> ed
Wrote file afiedt.buf
1 BEGIN
2 FOR i IN 1 .. 100 LOOP
3 stp( 'test333', 'test333_dst', 1000, 31000 );
4 END LOOP;
5* END;
SQL> /
PL/SQL procedure successfully completed.
Elapsed: 00:00:14.86without bulk collect ~ 15 sec.
bulk collect version ~ 22 sec. ... 7 sec longer / 15 sec. = about a 45% performance decrease.
-
Best practices to pass collections and navigate between 2 views endlessly
Hello, I have a question about efficiency and memory optimization in this case. I have a managed bean that shows a list of activities; I can select one activity and the application redirects to another view. This view is controlled by another managed bean that shows the specified activity.
My idea is to pass the collection and the id of the specified activity to the second managed bean, and then have the second managed bean pass the collection back to the first managed bean.
I had thought of passing properties by request and retrieving them in the second bean, but I am not sure which scope to use in both beans, because the first bean passes the collection to the first again.
I also thought of using SessionScope in both beans, but I have doubts about the memory efficiency in this case.
How to pass parameters is not yet defined:
- Using h:link and attributes
- Using setPropertyActionListener between both beans
- Other approaches I don't know about
First managedBean (show list)
@ManagedBean(name="actividades")
@ViewScoped // I'm not sure which scope to use
public class ActividadesController implements Serializable {
private static final long serialVersionUID = 1L;
private final static Logger logger=Logger.getLogger(ActividadesController.class);
private List<Actividad> listado; // All activities
@ManagedProperty(value="#{actividadBO}")
private ActividadBO actividadBo;
@ManagedProperty(value="#{asociaciones}")
private AsociacionController asociacionController;
/** methods **/
Second managedBean (specified activity)
@ManagedBean(name="actV")
@ViewScoped // I'm not sure which scope to use
public class ActividadView implements Serializable {
private static final long serialVersionUID = 1L;
private Actividad actividad;
private String comentario;
private List<Actividad> listado; // All activities, to avoid having to search again
@ManagedProperty(value="#{actividadBO}")
private ActividadBO actividadBo;
private Integer idActividad;
@PostConstruct
public void init(){
//actividad=actividadBo.get(idActividad);
actividad=actividadBo.get(idActividad);
actualizarComentarios(actividad.getIdActividad());
actualizarAdjuntos(actividad.getIdActividad());
}
/** methods **/
Any suggestions??
Kind regards.
-
Util to auto-generate getters and setters...
Does anyone know of a utility that automatically generates getter and setter methods from a list of variable names???
Might stop me getting RSI!
i gave up on gets/sets about 2 weeks after my lecturer introduced them to us :/
Giving up on gets/sets is a mistake... take it from an EXPERIENCED programmer.
you assume 2 much. Uni was a long time ago 4 me.
>
if a var can be modified, then make it public. Though adding a get/set method does provide encapsulation, it also requires more typing, bloats code and is also a fraction slower.
Adding get/set methods provides more than just the encapsulation. It provides easier debugging, not to mention an easier way to read the code.
Encapsulation encapsulates the idea of ezier debuggin :]
gets/sets do not automatically give you code readability, and badly named gets/sets can detract from readability.
>
Sometimes gets/sets serve a purpose, but most of the time they're just a waste of time.
If you think set/get is a waste of time your attitude will get you into trouble. Consider code with a full set of public variables in a 'complex' system (well, let's just say 1500 classes).
ok, you've applied my philosophy to your field, now let me apply yours to mine.
I write games for Java-enabled mobile phones (J2ME MIDP 1.0); on this platform, code size (and memory usage) is a SERIOUS concern.
FYI. the Nokia 6310i mobile phone has approx. 140k of heap, and a jar size limited of 30k.
EVERY line of code has to be optimal, in both space and time.
The cost of gets/sets, inheritance, interfaces and all the other wonderful OO design features of Java are serious performance inhibitors, and consequently they are used only when absolutely necessary.
>
During development a bug is discovered and you realize
that the bug is due to a change in a specific
variable. How do you, quickly and simply, find out
what classes are changing the variable. It could be
anywhere; but by having a get and set method for that
variable you could add a simple code like "new
Exception().printStackTrace();" into the set method
and get a trace when the bug happens. This way you
would know within seconds what object is changing the variable, making the debugging easy.
don't write buggy code ;] (that was a j/k btw)
btw, im curious how exactly do u realise that the bug is related to a specific variable? gets/sets help debugging, but they are not the magic bullet of debugging techniques.
>
What if you would like to override a class and do something before or after a variable is manipulated? This would be impossible if all variables are public. You will lose all control of your code.
you are still arguing a different point to me - the abstraction of gets/sets does serve a purpose, but it also imposes a cost.
>
There are many more reasons for adding the get/set
methods but it will take me all day to write them all
here.
I say: "have all variables protected, GET OFF YOUR
ASS, and add the 200 lines of code" if not for you
then for the one that later will be using or fixing
the code.
It's quite funny watching a newbie programmer start writing a class: they identify the class's required attributes, then write 200 lines of gets and sets before they even consider tackling the 'hard' bit[s] of the class :]
What do you think of the code guidelines that are enforced by most software companies? This is more important than most NEWBIES think; wait a few years and you will get the point.
my point here is that training programmers to follow guidelines before you have taught them the fundamentals of problem solving is futile.
What about comments? Do you find them funny and useless? hope you don't... for your sake.
no, all good code should be commented. But I have to admit, I don't spend time commenting code as I write it; I find it slows down my coding. However, I will always go back and comment any code if it is to be used by someone else.
>
Thinking it funny that people take the time and effort to make their code more readable, understandable, accessible, flexible and overall prettier makes you the newbie.
hmm, unprovoked flaming - now who's the newbie :/
>
It scares me to think that the new breed of
programmers will think it funny to write GOOD code.
bopen, bwise, bbetter...
What frustrates me is why good design always means slower performance.
It shouldn't, and until Java progresses to the point where the runtime cost of good design is not significant, I will still regard Java as a primitive language. -
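Back to the original question in this thread: even without an IDE, a throwaway accessor generator is only a few lines of Java. A sketch (the class name and the field list are invented for the example):

```java
public class AccessorGen {
    // Emit a getter and a setter for one field, e.g. ("String", "name").
    static String accessors(String type, String field) {
        String cap = Character.toUpperCase(field.charAt(0)) + field.substring(1);
        return "    public " + type + " get" + cap + "() { return " + field + "; }\n"
             + "    public void set" + cap + "(" + type + " " + field
             + ") { this." + field + " = " + field + "; }\n";
    }

    public static void main(String[] args) {
        // Hypothetical type/name pairs to generate accessors for.
        String[][] fields = { { "int", "id" }, { "String", "name" } };
        StringBuilder sb = new StringBuilder();
        for (String[] f : fields) sb.append(accessors(f[0], f[1]));
        System.out.print(sb);
    }
}
```

For what it's worth, IDEs such as Eclipse and IntelliJ IDEA also have a built-in generate-getters/setters action.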
I have the latest Creative Cloud versions of Lightroom and Lightroom mobile on my iPad. On my desktop I have created a collection which I have put in a custom order. On my iPad I go into that collection and set the order to custom order, which I presume means the order I have on my desktop. However, the photos on the iPad seem to be just random photos from the collection, THEN the rest of the collection is in the correct order after that.
Do you have virtual copies in your collection?
Could you please send me a LR Desktop + Mobile diagnostic log - best as a private message with a downloadable Dropbox link.
You can trigger the LR Desktop diagnostic log via the LR Desktop preferences -> Lightroom mobile; when you hold down the Alt key you will notice a Generate Diagnostic Log button.
The Lr Mobile app log can be triggered when you open the settings and long-press the LR icon; a diagnostic log will be generated and attached to your local mail client. While opening the settings, could you double-check that you are signed in?
Thanks
Guido -
Smart collection very slow (performance)
I have a problem with the Smart Collection. They are very slow in 3.2
I did a little test and compared 2.7 with 3.2 with the same catalog.
In 2.7 catalog all the smart collections are very fast.
So I have the same catalog with the same smart collections. Often they take 5 - 10 seconds when clicked the first time.
Even a Smart collection yielding 2 images can take up to 5 - 10 seconds.
This is the Smart Collection
Name: Testing
Match: All
Field: Filename
Expression: contains
Value ANH_20090331_4253.JPG,ANH_20090331_4254.JPG,
This should result in 2 images. The full names are given.
In LR 2.7 the results are there in a split second. In 3.2, the first time you click on it, it takes 5 - 10 seconds. The processor goes to 50% all the time.
Even a normal collection (not smart collections) with 97 images takes more than 5 seconds when clicked. This is very slow.
The second time it is within a second. But clicking another "difficult smart collection" and returning, then again it takes 5 - 10 seconds.
For the record:
Lightroom version: 3.2 [692106]
Operating system: Microsoft Windows XP Professional Service Pack 3 (Build 2600)
Version: 5.1 [2600]
Application architecture: x86
System architecture: x86
Physical processor count: 2
Processor speed: 2,9 GHz
Built-in memory: 2047,0 MB
Real memory available to Lightroom: 716,8 MB
Real memory used by Lightroom: 150,8 MB (21,0%)
Virtual memory used by Lightroom: 148,0 MB
Memory cache size: 62,4 MB
System DPI setting: 96 DPI
Displays: 1) 1280x1024
Serial Number: 116040017934919748523304
Application folder: C:\Program Files\Adobe\Adobe Photoshop Lightroom 3.2
Library Path: D:\Lightroom\FotoDatabase\FotoDatabase-3.lrcat
Settings Folder: C:\Documents and Settings\Dick\Application Data\Adobe\Lightroom
Smart collections are really slow for me also.
It takes well over two minutes to populate the collections' image counts when first starting Lightroom. I have approx 80 smart collections, some quite complex, but even so, this seems very slow. Once the initial counts have populated, showing the images is reasonably fast.
The rest of LR is pretty fast, SC's are the only thing that bothers me.
Using LR 3.3RC, high end PC:
Win7x64
i7/860
16GB RAM
SSD for boot
SSD for LR
45,000 images in catalogue
Mike -
JAXB generated collection ickyness
Is there a way when generating JAXB beans from an XSD to direct it to use java.util collections of beans directly instead of creating custom container classes? There are various JAXB XSD extensions that can dictate some differences to the generated beans and there are some JAXB plugins too but I can't see anything that does this.
More details:
If I take a nice class looking something like the following:
class Foo {
List<Bar> getBars() ...
class Bar {
// Usage: List<Bar> bars = foo.getBars();
Generate an XSD from it, then generate JAXB beans from that XSD, and I'll end up with something pretty much like:
class Foo {
// Ew!
Bars getBars() ...
class Bars {
// Yuk!
List<Bar> getBar() ...
class Bar {
// Usage: List<Bar> bars = foo.getBars().getBar();
Pretty nasty. That Bars class obviously gets generated as a place in which to stash the properties that might be in the <bars> element of the XML, which will look something like:
<foo>
<bars> <!-- This element might have properties -->
<bar>...<bar>
<bar>...<bar>
<bar>...<bar>
</bars>
</foo>
Now in the real-world scenario I'm actually starting from an XSD, but the problem is the same - I want to generate beans from it, I know that the container element won't have any properties, and so I want the container to be a plain old list.
Obviously I could make some of the problems go away by starting from annotated Java source files and generating the XSD, but that would create its own can of snakes that I'm not going to pry into.
Any ideas?
Edited by: dcminter on Apr 4, 2012 4:25 PM
nlpappu,
I suggest posting the entire error message and stack trace you are getting, as well as the section of your code that is causing the error.
It may also be helpful if you mentioned the Oracle database version, java version and platform you are using.
I assume you are using SUN's reference implementation (RI) for JAXB, correct?
Good Luck,
Avi. -
Very slow retrieval of messages with IMAP
I have a problem with javamail versions greater than 1.4.4:
I use IMAP and STARTTLS to connect to Exchange Server 2010
The mail I try to read in this example has RFC822.SIZE = 2317220, but the problem occurs with all mails.
This is the code where it happens (line 3):
msg = (MimeMessage)currentFolder.getMessage(1);
log.debug(mailBoxID +" start reading mail");
fullMessage = new MimeMessage(msg);
log.debug(mailBoxID +" end reading mail");
With 1.4.5 and higher it takes approx. 8 minutes (in debug mode) to get the full mail.
With 1.4.4 it takes only 4 seconds (also in debug mode).
I tried 3 scenarios with 1.5.1 (but I have the same problem with 1.4.5 and 1.4.6):
mail.imap.fetchsize: 16384 ==> 8 minutes
mail.imap.fetchsize: 102400 ==> 7 minutes
mail.imap.partialfetch: false ==> 9 minutes
I tried 2 scenarios with 1.4.4:
mail.imap.fetchsize: 16384 ==> 4 seconds
mail.imap.partialfetch: false ==> 4 seconds
I suppose fetchsize and partialfetch are not part of the problem.
There must be some configuration issue, but I can't find it.
And I have the same problem with the demo programs.
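For reference, the scenarios above boil down to a couple of standard JavaMail session properties. A sketch of how such a configuration is typically built — the property names are the documented ones, the Session/Store plumbing is omitted since it is unchanged between runs:

```java
import java.util.Properties;

public class ImapFetchConfig {
    // Builds the session properties for one test scenario.
    public static Properties scenario(String fetchsize, boolean partialfetch) {
        Properties props = new Properties();
        props.put("mail.store.protocol", "imap");
        props.put("mail.imap.starttls.enable", "true");
        if (partialfetch) {
            // Fetch the message body in chunks of this many bytes.
            props.put("mail.imap.fetchsize", fetchsize);
        } else {
            // Fetch the whole body with a single FETCH command.
            props.put("mail.imap.partialfetch", "false");
        }
        // Session session = Session.getInstance(props); // then connect as usual
        return props;
    }
}
```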
Here are some snippets from the protocol debug and the program debug for all scenarios:
DEBUG: setDebug: JavaMail version 1.5.1
DEBUG: getProvider() returning javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Oracle]
DEBUG IMAP: mail.imap.fetchsize: 16384
DEBUG IMAP: mail.imap.ignorebodystructuresize: false
DEBUG IMAP: mail.imap.statuscachetimeout: 1000
DEBUG IMAP: mail.imap.appendbuffersize: -1
DEBUG IMAP: mail.imap.minidletime: 10
DEBUG IMAP: disable AUTH=PLAIN
DEBUG IMAP: disable AUTH=NTLM
DEBUG IMAP: enable STARTTLS
==> fetches in blocks of 16384
A10 FETCH 1 (BODY[]<0.16384>)
* 1 FETCH (BODY[]<0> {16384}
FLAGS (\Seen))
A11 OK FETCH completed.
A12 FETCH 1 (BODY[]<32768.16384>)
* 1 FETCH (BODY[]<32768> {16384}
FLAGS (\Seen))
A202 OK FETCH completed.
A203 FETCH 1 (BODY[]<3162112.16384>)
* 1 FETCH (BODY[]<3162112> {11800}
FLAGS (\Seen))
A203 OK FETCH completed.
2013-11-19 13:10:31 [Thread-4] DEBUG MailHandler :67 - start reading mail
2013-11-19 13:18:31 [Thread-4] DEBUG MailHandler :69 - end reading mail
==> 8 minutes
DEBUG: setDebug: JavaMail version 1.5.1
DEBUG: getProvider() returning javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Oracle]
DEBUG IMAP: mail.imap.fetchsize: 102400
DEBUG IMAP: mail.imap.ignorebodystructuresize: false
DEBUG IMAP: mail.imap.statuscachetimeout: 1000
DEBUG IMAP: mail.imap.appendbuffersize: -1
DEBUG IMAP: mail.imap.minidletime: 10
DEBUG IMAP: disable AUTH=PLAIN
DEBUG IMAP: disable AUTH=NTLM
DEBUG IMAP: enable STARTTLS
==> fetches in blocks of 102400
FLAGS (\Seen))
A10 OK FETCH completed.
A11 FETCH 1 (BODY[]<102400.102400>)
* 1 FETCH (BODY[]<102400> {102400}
FLAGS (\Seen))
A39 OK FETCH completed.
A40 FETCH 1 (BODY[]<3072000.102400>)
* 1 FETCH (BODY[]<3072000> {101912}
FLAGS (\Seen))
A40 OK FETCH completed.
2013-11-19 14:23:42 [Thread-4] DEBUG MailHandler :67 - start reading mail
2013-11-19 14:30:30 [Thread-4] DEBUG MailHandler :69 - end reading mail
==> 7 minutes
DEBUG: setDebug: JavaMail version 1.5.1
DEBUG: getProvider() returning javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Oracle]
DEBUG IMAP: mail.imap.partialfetch: false
DEBUG IMAP: mail.imap.ignorebodystructuresize: false
DEBUG IMAP: mail.imap.statuscachetimeout: 1000
DEBUG IMAP: mail.imap.appendbuffersize: -1
DEBUG IMAP: mail.imap.minidletime: 10
DEBUG IMAP: disable AUTH=PLAIN
DEBUG IMAP: disable AUTH=NTLM
DEBUG IMAP: enable STARTTLS
==> 1 big fetch
2013-11-19 13:21:35 [Thread-4] DEBUG MailHandler :67 - start reading mail
2013-11-19 13:30:47 [Thread-4] DEBUG MailHandler :69 - end reading mail
==> 9 minutes
DEBUG: setDebug: JavaMail version 1.4.4
DEBUG: getProvider() returning javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc]
DEBUG: mail.imap.fetchsize: 16384
DEBUG: mail.imap.statuscachetimeout: 1000
DEBUG: mail.imap.appendbuffersize: -1
DEBUG: mail.imap.minidletime: 10
DEBUG: disable AUTH=PLAIN
DEBUG: disable AUTH=NTLM
DEBUG: enable STARTTLS
==> 1 big fetch
2013-11-19 13:55:47 [Thread-4] DEBUG MailHandler :67 - start reading mail
2013-11-19 13:55:51 [Thread-4] DEBUG MailHandler :69 - end reading mail
==> 4 seconds
DEBUG: setDebug: JavaMail version 1.4.4
DEBUG: getProvider() returning javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc]
DEBUG: mail.imap.partialfetch: false
DEBUG: mail.imap.statuscachetimeout: 1000
DEBUG: mail.imap.appendbuffersize: -1
DEBUG: mail.imap.minidletime: 10
DEBUG: disable AUTH=PLAIN
DEBUG: disable AUTH=NTLM
DEBUG: enable STARTTLS
==> 1 big fetch
2013-11-19 14:02:52 [Thread-4] DEBUG MailHandler :67 - start reading mail
2013-11-19 14:02:58 [Thread-4] DEBUG MailHandler :69 - end reading mail
==> 4 seconds
Here is a listing of all properties:
java.runtime.name=Java(TM) SE Runtime Environment
sun.boot.library.path=H:\java\jre6\bin
java.vm.version=20.5-b03
java.vm.vendor=Sun Microsystems Inc.
java.vendor.url=http://java.sun.com/
path.separator=;
mail.mime.decodefilename=true
java.vm.name=Java HotSpot(TM) Client VM
file.encoding.pkg=sun.io
user.country=BE
sun.java.launcher=SUN_STANDARD
sun.os.patch.level=Service Pack 2
mail.imap.auth.ntlm.disable=true
java.vm.specification.name=Java Virtual Machine Specification
user.dir=H:\EclipseSpace\DigisMailBatchNewDev
java.runtime.version=1.6.0_30-b12
java.awt.graphicsenv=sun.awt.Win32GraphicsEnvironment
mail.imap.fetchsize=102400
java.endorsed.dirs=H:\java\jre6\lib\endorsed
os.arch=x86
java.io.tmpdir=D:\Temp\
line.separator=
java.vm.specification.vendor=Sun Microsystems Inc.
user.variant=
os.name=Windows XP
sun.jnu.encoding=Cp1252
java.library.path=H:\java\jre6\bin;C:\WINDOWS\Sun\Java\...
mail.imap.auth.plain.disable=true
java.specification.name=Java Platform API Specification
java.class.version=50.0
mail.mime.address.strict=false
sun.management.compiler=HotSpot Client Compiler
os.version=5.1
user.home=C:\Documents and Settings\******
user.timezone=Europe/Paris
java.awt.printerjob=sun.awt.windows.WPrinterJob
java.specification.version=1.6
file.encoding=Cp1252
user.name=******
java.class.path=H:\EclipseSpace\DigisMailBatchNewDev;...
mail.mime.decodetext.strict=false
java.vm.specification.version=1.0
sun.arch.data.model=32
java.home=H:\java\jre6
sun.java.command=com.dexia.digis.mail.Launcher
mail.imap.partialfetch=true
java.specification.vendor=Sun Microsystems Inc.
user.language=nl
awt.toolkit=sun.awt.windows.WToolkit
java.vm.info=mixed mode
java.version=1.6.0_30
java.ext.dirs=H:\java\jre6\lib\ext;C:\WINDOWS\Sun\J...
sun.boot.class.path=H:\java\jre6\lib\resources.jar;H:\jav...
java.vendor=Sun Microsystems Inc.
file.separator=\
java.vendor.url.bug=http://java.sun.com/cgi-bin/bugreport...
mail.imap.starttls.enable=true
sun.cpu.endian=little
sun.io.unicode.encoding=UnicodeLittle
mail.mime.parameters.strict=false
sun.desktop=windows
sun.cpu.isalist=
And here is the complete protocol stacktrace up to the point where the mail is retrieved.
It's the same for all cases except the snippets above:
DEBUG IMAP: trying to connect to host "our-email-server.be", port 143, isSSL false
* OK The Microsoft Exchange IMAP4 service is ready.
A0 CAPABILITY
* CAPABILITY IMAP4 IMAP4rev1 AUTH=NTLM AUTH=GSSAPI AUTH=PLAIN STARTTLS UIDPLUS CHILDREN IDLE NAMESPACE LITERAL+
A0 OK CAPABILITY completed.
DEBUG IMAP: AUTH: NTLM
DEBUG IMAP: AUTH: GSSAPI
DEBUG IMAP: AUTH: PLAIN
DEBUG IMAP: protocolConnect login, host=our-email-server.be, user=mydomain\userid\aliasname, password=<non-null>
A1 STARTTLS
A1 OK Begin TLS negotiation now.
A2 CAPABILITY
* CAPABILITY IMAP4 IMAP4rev1 AUTH=NTLM AUTH=GSSAPI AUTH=PLAIN UIDPLUS CHILDREN IDLE NAMESPACE LITERAL+
A2 OK CAPABILITY completed.
DEBUG IMAP: AUTH: NTLM
DEBUG IMAP: AUTH: GSSAPI
DEBUG IMAP: AUTH: PLAIN
DEBUG IMAP: LOGIN command trace suppressed
DEBUG IMAP: LOGIN command result: A3 OK LOGIN completed.
A4 CAPABILITY
* CAPABILITY IMAP4 IMAP4rev1 AUTH=NTLM AUTH=GSSAPI AUTH=PLAIN UIDPLUS CHILDREN IDLE NAMESPACE LITERAL+
A4 OK CAPABILITY completed.
DEBUG IMAP: AUTH: NTLM
DEBUG IMAP: AUTH: GSSAPI
DEBUG IMAP: AUTH: PLAIN
A5 LIST "" myfolder
* LIST (\HasNoChildren) "/" myfolder
A5 OK LIST completed.
DEBUG IMAP: connection available -- size: 1
A6 SELECT myfolder
* 1 EXISTS
* 0 RECENT
* FLAGS (\Seen \Answered \Flagged \Deleted \Draft $MDNSent)
* OK [PERMANENTFLAGS (\Seen \Answered \Flagged \Deleted \Draft $MDNSent)] Permanent flags
* OK [UIDVALIDITY 39147] UIDVALIDITY value
* OK [UIDNEXT 141] The next unique identifier value
A6 OK [READ-WRITE] SELECT completed.
A7 EXPUNGE
* 1 EXISTS
A7 OK EXPUNGE completed.
A8 FETCH 1 (FLAGS)
* 1 FETCH (FLAGS (\Seen))
A8 OK FETCH completed.
A9 FETCH 1 (ENVELOPE INTERNALDATE RFC822.SIZE)
* 1 FETCH (ENVELOPE ("Fri, 9 Aug 2013 14:17:14 +0200" "subject" ........ INTERNALDATE "09-Aug-2013 14:17:14 +0200" RFC822.SIZE 2317220)
A9 OK FETCH completed.
A10 FETCH 1 (BODY[]<0.102400>)
* 1 FETCH (BODY[]<0> {102400}
The MimeMessage constructor you're using needs to copy the entire message from the server. It uses the Message.writeTo method, writing the message content into a pipe that is read by the constructor. In previous releases it was a bug that the writeTo method wasn't fetching the data in chunks, as it does in other cases. Setting partialfetch=false will revert to the old behavior for writeTo, and for all other accesses of the message content.
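The pipe mechanism described here can be sketched with plain java.io streams: one thread plays the role of writeTo, pushing the message bytes into a PipedOutputStream, while the other end (the copy constructor's role) reads them back. The names below are illustrative, not the JavaMail internals:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class PipeCopyDemo {
    // Copies a message body the way the MimeMessage(MimeMessage) copy
    // constructor is described to work: a writer thread streams the
    // source into a pipe, the reading side consumes it as an InputStream.
    public static byte[] copyViaPipe(final byte[] source) throws IOException {
        final PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);
        Thread writer = new Thread(new Runnable() {
            public void run() {
                try {
                    // Stand-in for msg.writeTo(out): with partialfetch
                    // enabled, each write here corresponds to chunked
                    // FETCH commands against the real server.
                    out.write(source);
                    out.close();
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
        });
        writer.start();
        // Stand-in for the copy constructor parsing the piped stream.
        ByteArrayOutputStream copy = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            copy.write(buf, 0, n);
        }
        return copy.toByteArray();
    }
}
```

The point of the sketch: the copy is gated by however fast the writer side can produce bytes, which is why a slow chunked fetch on the server side stalls the whole constructor.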
It looks like both 1.4.4 and 1.5.1 are fetching the message content using a single FETCH command when you set partialfetch=false, according to your message. If they're both sending the same commands to the server, I can't explain why it would be fast in one case and slow in the other.
I did similar testing using my Exchange 2010 server. Fetching a similar size message took less than 20 seconds. Setting partialfetch=false brought it down to less than 5 seconds. Something's clearly wrong in your environment, but I don't know what.
Finally, I'm sure you understand that you only need to use that MimeMessage copy constructor in special circumstances, and you should probably avoid using it unless it's absolutely necessary. Even in the best case it's wasting time and memory. -
Universe.applyOverload method runs non-linearly slower and slower
The Universe.applyOverload method runs non-linearly slower and slower; that is, for the 10th user and restriction we add, the method takes 1 second, but for the 80th user and restriction it takes 8 seconds.
Customers think it's a bug in the BOE Java SDK method. Could I know why this method is non-linear, and is there any way to improve its running speed?
The following is the code; the iterator loops 80 times.
while (iter.hasNext()) {
user = (rpt_users_t) iter.next();
IOverload overload = (IOverload) newObjs.add("Overload");
overload.setTitle(user.getBoezh().trim());
overload.setUniverse(universe.getID());
overload.setConnection(connectionID);
overload.getRestrictedRows().clear();
overload.getRestrictedRows().add("HZB0101_T","HZB0101_T.BRANCH_COMPANY_CODE='3090100' AND HZB0101_T.DEPARTMENT_CODE='"+ user.getBmdm() + "'");
overload.getRestrictedRows().add("BM_T","BM_T.DEPARTMENT_GROUP_CODE='" + user.getBmzdm().trim()+ "'");
infoStore.commit(newObjs);
// Commit to User
IInfoObject everyone = (IInfoObject) infoStore.query(
"Select TOP 1 SI_ID " + " From CI_SYSTEMOBJECTS "+ " Where SI_KIND='User' " + " And SI_NAME='"+ user.getBoezh().trim() + "'").get(0);
int everyoneID = everyone.getID();
universe.applyOverload(overload, everyoneID, true);
//infoStore.commit(objs);
System.out.println(user.getBoezh() + " loading...");
}
When invoking applyOverload multiple times, it's O(N^2) if you're granting rights to a User or UserGroup.
When granting, the applyOverload method retrieves all Overloads applied to the Universe and walks across each one, granting the identified ones and removing rights from others.
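That retrieve-and-rewalk behavior is exactly what produces the quadratic total. A stdlib-only sketch of the cost model (the counters are illustrative, not BOE internals): if each "apply" call re-walks everything applied so far, N calls do roughly N^2/2 units of work, which matches 1 second at the 10th call growing to 8 seconds at the 80th.

```java
import java.util.ArrayList;
import java.util.List;

public class QuadraticApplyDemo {
    // Mimics the described behavior: each applyOverload call retrieves
    // every overload already applied to the universe and walks them all.
    public static long applyAll(int n) {
        List<Integer> applied = new ArrayList<Integer>();
        long work = 0;
        for (int i = 0; i < n; i++) {
            applied.add(i);
            // Re-walk all overloads applied so far (the O(N) inner step).
            for (int ignored : applied) {
                work++;
            }
        }
        return work; // n*(n+1)/2 total units -- quadratic in n
    }
}
```

With n = 10 the total is 55 units of work; with n = 80 it is 3240 — nearly 60x, even though only 8x as many overloads were added. Batching the grants, if the SDK allows it, is the usual way out.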
Sincerely,
Ted Ueda -
Collective and individual indicator in MRP4 view
Hi, we are using strategy 20 (MTO). For the header material the collective/individual indicator is not maintained. For the item it is maintained as 1 (individual). We created one sales order and ran MD02 for the header material. In this case MRP is considering the unrestricted stock, and accordingly no PRs were generated. We changed it to 2 for the item material; again it considers the stock quantity in the MRP run, so no PRs were generated. As I understand it, if the indicator is 1 for the item, it should not consider the existing stock, as that stock is not sales-order specific. Please advise why the system is considering the existing stock if 1 is set for the item material.
Hi,
The Ind/Coll indicator is used to display the requirements for a particular component in the following manner.
Individual requirements
Requirement quantities of the dependent material are stated individually for each header material's requirement.
Collective requirements
Requirement quantities of the dependent material are grouped together for all the materials.
Ind/Coll
All the MTO requirements are shown separately and the rest are grouped together.
Hope this clarifies your doubt. This indicator does not control whether the unrestricted stock is to be considered.
Regards
Ramana