DrawLine(...) method generates too many int[] instances
My application performs very intensive painting (a kind of real-time chart). It often invokes the drawLine(...) method of the Graphics class. Under a profiler I have found that it instantiates too many int[] arrays (about 300,000 every few seconds). Though the garbage collector cleans these auxiliary instances up well, I don't like it running that often. How can I write my own drawLine method which doesn't generate so many auxiliary instances? By the way, I only need to draw vertical or horizontal lines.
Are you talking about drawing a grid?
If so, 300,000 every few seconds sounds like there is something wrong.
Suppose your screen is 1000x1000 and you are drawing lines 10 pixels apart. That is 100 lines in each direction, for 200 lines total.
200 lines at 4 ints per call is 800 ints per repaint. Assuming 100,000 ints per second, that means you are calling repaint 125 times per second?
I have about 40-50 JFrames, each of which contains a panel that performs the painting. Repaint is invoked 10-50 times per second for each JFrame, and each repaint draws 1200 lines. The int[] instances are created inside the Swing version of the drawLine(...) method; at least Borland Optimizeit 5.0 says so.
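For what it's worth, those numbers are self-consistent. A quick lower-bound estimate of the drawLine call rate, assuming (an assumption, not something the profiler confirms) at least one int[] per call:

```java
public class AllocationEstimate {
    static int callsPerSecond(int frames, int repaintsPerSec, int linesPerRepaint) {
        return frames * repaintsPerSec * linesPerRepaint;
    }

    public static void main(String[] args) {
        // Lower bound from the post: 40 frames, 10 repaints/s, 1200 lines each.
        System.out.println(callsPerSecond(40, 10, 1200)); // 480000 drawLine calls/s
    }
}
```

Even one temporary array per call at that rate easily explains hundreds of thousands of int[] instances every few seconds.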
If anything, I would make a temporary line object:

private Line2D line = new Line2D.Double();

public void paint(Graphics _g) {
    super.paint(_g);
    Graphics2D g = (Graphics2D) _g;
    // for each horizontal and vertical line:
    line.setLine(x1, y1, x2, y2);
    g.draw(line);
}
I've tried your code. Thanks a lot, but it generates too many int[] instances too :(.
Also, make sure you are not drawing lines outside of the area that needs to be repainted. That is, only draw a line if it would intersect the viewable rectangle of the panel, or even better, if it intersects the getClipBounds() of the graphics object.
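What drawLine allocates internally is JDK-version dependent, so it is hard to fix from outside. But since only axis-aligned lines are needed, one workaround worth profiling is drawing them as 1-pixel-thick rectangles with fillRect, combined with the clip test suggested above. A sketch (the hLine/vLine helper names are made up here, and whether fillRect allocates less on your JDK must be verified under the same profiler):

```java
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Rectangle;
import java.awt.image.BufferedImage;

public class AxisAlignedLines {

    /** Draws a horizontal line as a 1-pixel-tall rectangle; skipped when outside the clip. */
    static void hLine(Graphics g, int x1, int x2, int y) {
        Rectangle clip = g.getClipBounds(); // may be null when no clip is set
        if (clip != null && (y < clip.y || y >= clip.y + clip.height)) {
            return; // entirely outside the damaged region
        }
        g.fillRect(Math.min(x1, x2), y, Math.abs(x2 - x1) + 1, 1);
    }

    /** Vertical counterpart: a 1-pixel-wide rectangle. */
    static void vLine(Graphics g, int x, int y1, int y2) {
        Rectangle clip = g.getClipBounds();
        if (clip != null && (x < clip.x || x >= clip.x + clip.width)) {
            return;
        }
        g.fillRect(x, Math.min(y1, y2), 1, Math.abs(y2 - y1) + 1);
    }

    public static void main(String[] args) {
        // Exercise the helpers off-screen; works headless.
        BufferedImage img = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        Graphics g = img.getGraphics();
        g.setColor(Color.WHITE);
        hLine(g, 0, 9, 5);
        vLine(g, 3, 0, 9);
        g.dispose();
        System.out.println(img.getRGB(7, 5) == Color.WHITE.getRGB()); // true: on the horizontal line
        System.out.println(img.getRGB(3, 2) == Color.WHITE.getRGB()); // true: on the vertical line
        System.out.println(img.getRGB(7, 2) == Color.WHITE.getRGB()); // false: untouched pixel
    }
}
```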
Similar Messages
-
LGWR generated too many logs.
Hi. I need your help.
My LGWR process generates too many log lines in its trace file, like:
WAIT #0: nam='rdbms ipc message' ela= 295 timeout=85 p2=0 p3=0 obj#=-1 tim=5887168973687
WAIT #0: nam='rdbms ipc message' ela= 490 timeout=85 p2=0 p3=0 obj#=-1 tim=5887168974217
WAIT #0: nam='log file parallel write' ela= 125 files=2 blocks=4 requests=2 obj#=-1 tim=5887168974443
WAIT #0: nam='rdbms ipc message' ela= 268 timeout=85 p2=0 p3=0 obj#=-1 tim=5887168974750
WAIT #0: nam='rdbms ipc message' ela= 394 timeout=84 p2=0 p3=0 obj#=-1 tim=5887168975184
WAIT #0: nam='log file parallel write' ela= 102 files=2 blocks=4 requests=2 obj#=-1 tim=5887168975384
WAIT #0: nam='rdbms ipc message' ela= 362 timeout=84 p2=0 p3=0 obj#=-1 tim=5887168975785
WAIT #0: nam='rdbms ipc message' ela= 464 timeout=84 p2=0 p3=0 obj#=-1 tim=5887168976291
WAIT #0: nam='log file parallel write' ela= 137 files=2 blocks=4 requests=2 obj#=-1 tim=5887168976519
WAIT #0: nam='rdbms ipc message' ela= 273 timeout=84 p2=0 p3=0 obj#=-1 tim=5887168976830
and the trace file is 30 GB. ;(
I checked the average wait time for 'log file parallel write' and it is '0'.
What is the problem? Does anybody know about this issue?
Please help me.
DBMS version: 10.2.0.4.0
OS: Sun SPARC
Post edited: 82Star82Star wrote:
There are no errors in the alert log.
But LGWR is still generating those messages in the trace file (about 6 lines per second).
Is it normal?

No, it's not normal. Is lgwr the only process doing this? If so it looks as if someone may have used oradebug or dbms_monitor to enable extended tracing on the log writer process.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
Author: Oracle Core -
Create procedure is generating too many archive logs
Hi
The following procedure was run on one of our databases and it hung, since too many archive logs were being generated.
What would be the answer? The db must remain in archivelog mode.
I understand the nologging concept, but as far as I know it applies to creating tables, views, indexes and tablespaces. This script creates a procedure.
CREATE OR REPLACE PROCEDURE APPS.Dfc_Payroll_Dw_Prc(Errbuf OUT VARCHAR2, Retcode OUT NUMBER
,P_GRE NUMBER
,P_SDATE VARCHAR2
,P_EDATE VARCHAR2
,P_ssn VARCHAR2
) IS
CURSOR MainCsr IS
SELECT DISTINCT
PPF.NATIONAL_IDENTIFIER SSN
,ppf.full_name FULL_NAME
,ppa.effective_date Pay_date
,ppa.DATE_EARNED period_end
,pet.ELEMENT_NAME
,SUM(TO_NUMBER(prv.result_value)) VALOR
,PET.ELEMENT_INFORMATION_CATEGORY
,PET.CLASSIFICATION_ID
,PET.ELEMENT_INFORMATION1
,pet.ELEMENT_TYPE_ID
,paa.tax_unit_id
,PAf.ASSIGNMENT_ID ASSG_ID
,paf.ORGANIZATION_ID
FROM
pay_element_classifications pec
, pay_element_types_f pet
, pay_input_values_f piv
, pay_run_result_values prv
, pay_run_results prr
, pay_assignment_actions paa
, pay_payroll_actions ppa
, APPS.pay_all_payrolls_f pap
,Per_Assignments_f paf
,per_people_f ppf
WHERE
ppa.effective_date BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
AND ppa.payroll_id = pap.payroll_id
AND paa.tax_unit_id = NVL(p_GRE, paa.tax_unit_id)
AND ppa.payroll_action_id = paa.payroll_action_id
AND paa.action_status = 'C'
AND ppa.action_type IN ('Q', 'R', 'V', 'B', 'I')
AND ppa.action_status = 'C'
--AND PEC.CLASSIFICATION_NAME IN ('Earnings','Alien/Expat Earnings','Supplemental Earnings','Imputed Earnings','Non-payroll Payments')
AND paa.assignment_action_id = prr.assignment_action_id
AND prr.run_result_id = prv.run_result_id
AND prv.input_value_id = piv.input_value_id
AND piv.name = 'Pay Value'
AND piv.element_type_id = pet.element_type_id
AND pet.element_type_id = prr.element_type_id
AND pet.classification_id = pec.classification_id
AND pec.non_payments_flag = 'N'
AND prv.result_value <> '0'
--AND( PET.ELEMENT_INFORMATION_CATEGORY LIKE '%EARNINGS'
-- OR PET.element_type_id IN (1425, 1428, 1438, 1441, 1444, 1443) )
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PET.EFFECTIVE_START_DATE AND PET.EFFECTIVE_END_DATE
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PIV.EFFECTIVE_START_DATE AND PIV.EFFECTIVE_END_DATE --dcc
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN Pap.EFFECTIVE_START_DATE AND Pap.EFFECTIVE_END_DATE --dcc
AND paf.ASSIGNMENT_ID = paa.ASSIGNMENT_ID
AND ppf.NATIONAL_IDENTIFIER = NVL(p_ssn, ppf.NATIONAL_IDENTIFIER)
------------------------------------------------------------------TO get emp.
AND ppf.person_id = paf.person_id
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN ppf.EFFECTIVE_START_DATE AND ppf.EFFECTIVE_END_DATE
------------------------------------------------------------------TO get emp. ASSIGNMENT
--AND paf.assignment_status_type_id NOT IN (7,3)
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN paf.effective_start_date AND paf.effective_end_date
GROUP BY PPF.NATIONAL_IDENTIFIER
,ppf.full_name
,ppa.effective_date
,ppa.DATE_EARNED
,pet.ELEMENT_NAME
,PET.ELEMENT_INFORMATION_CATEGORY
,PET.CLASSIFICATION_ID
,PET.ELEMENT_INFORMATION1
,pet.ELEMENT_TYPE_ID
,paa.tax_unit_id
,PAF.ASSIGNMENT_ID
,paf.ORGANIZATION_ID;
BEGIN
DELETE cust.DFC_PAYROLL_DW
WHERE PAY_DATE BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
AND tax_unit_id = NVL(p_GRE, tax_unit_id)
AND ssn = NVL(p_ssn, ssn);
COMMIT;
FOR V_REC IN MainCsr LOOP
INSERT INTO cust.DFC_PAYROLL_DW(SSN, FULL_NAME, PAY_DATE, PERIOD_END, ELEMENT_NAME, ELEMENT_INFORMATION_CATEGORY, CLASSIFICATION_ID, ELEMENT_INFORMATION1, VALOR, TAX_UNIT_ID, ASSG_ID,ELEMENT_TYPE_ID,ORGANIZATION_ID)
VALUES(V_REC.SSN,V_REC.FULL_NAME,v_rec.PAY_DATE,V_REC.PERIOD_END,V_REC.ELEMENT_NAME,V_REC.ELEMENT_INFORMATION_CATEGORY, V_REC.CLASSIFICATION_ID, V_REC.ELEMENT_INFORMATION1, V_REC.VALOR,V_REC.TAX_UNIT_ID,V_REC.ASSG_ID, v_rec.ELEMENT_TYPE_ID, v_rec.ORGANIZATION_ID);
COMMIT;
END LOOP;
END ;
So, how could I assist our developer with this, so that she can run it again without it generating a ton of logs?
Thanks
Oracle 9.2.0.5
AIX 5.2

The amount of redo generated is a direct function of how much data is changing. If you insert 'x' rows, you are going to generate 'y' MB of redo. If your procedure is destined to insert 1000 rows, then it is destined to create a certain amount of redo. Period.
I would question the performance of the procedure shown ... using a cursor loop with a commit after every row is going to be a slug on performance, but that doesn't change the fact that 'x' inserts will always generate 'y' redo. -
Hi all,
I have a BPEL process which consumes JMS messages from the ESB_monitor topic, so when there is an error or any change in the ESB, a large number of instances of this BPEL process are created.
How can I reduce the number of instances? Has anyone encountered this and found a solution? Please help.
Regards,
Terry

Hi Pavan,
Thanks for the quick reply.
What I'm doing at present: I want the ESB instances to be visible in the BPEL console using the ESB APIs. For that I'm getting the flowid and other inputs from this topic.
My scenario is: "I have created an ESB with a DB adapter which stores a single character. When the value exceeds the limit it shows an error with a retryable option, since async routing is used." I'm trying to get this retryable instance into the BPEL console using the ESB APIs and resubmit it from the BPEL console. The ESB_ERROR topic gives out only the payload.
Please give your thoughts. Is there any other way to do this?
Regards,
Terry -
Issue: IDoc INVOIC generated too many times
Hello everyone,
I have an issue where an IDoc is generated each time a logistics invoice (MIRO) is created.
A message associated with the IDoc type INVOIC02 was customized first, in order to generate an IDoc when an invoice is saved.
The problem is that each time I save an invoice, two or three INVOIC02 IDocs are created instead of one ...
Do you know what is the problem and how to solve it?
Regards,
Guillaume

Well,
I finally found a solution by myself, for those who are interested:
Go to transaction VOFM -> requirements -> output control
Create a routine for the application MR
Create a condition on the message key that ends with 000000 (IF NOT MSG_OBJKY IS INITIAL AND MSG_OBJKY+18(6) EQ 000000) -> this means you only want the header messages for the invoice documents,
For further information do not hesitate to ask me.
Regards -
Hi
We've been struggling with this issue for a while and were wondering if anyone has had similar experiences. Reading this group and other weblogic ones, I see that entity bean instance counts have been causing problems; however, we are having similar problems with stateless session beans.
Here is our pool tag
<pool>
<max-beans-in-free-pool>2</max-beans-in-free-pool>
<initial-beans-in-free-pool>1</initial-beans-in-free-pool>
</pool>
Now, when the server starts JProbe is reporting anything up to 30
instances of EJB implementation Objects on the heap when we expect to
see at most 2.
This situation arises when the profiler invokes weblogic with no heap
switches.
If we use -ms64m -mx64m then we get even more instances on the heap.
If we set the switches to -ms128m -mx256m then the counts are up in
the hundreds and JProbe/weblogic takes as long as 30 minutes to start
up.
We are running the software on Win2000 with an 800Mhz intel processor
running in 512 MB RAM.
We have tried to get this working on Solaris8 but the server fell over
after 2.5 hours having consumed 600 MB of virtual memory.
Has anyone suffered the same problem?
Also, are we alone in experiencing these huge delays when starting the server via the profiler?
Oh yes, if we start the server 'standalone' it comes up in about 1 minute.
Any help greatly appreciated
Cheers
Duncan L Strang

Did you take a thread dump to see what the server was doing? I would start there ... hopefully it is a bug in your code (those are easy to get a fix for ;-)
Peace,
Cameron Purdy
Tangosol Inc.
Tangosol Coherence: Clustered Coherent Cache for J2EE
Information at http://www.tangosol.com/
"Duncan L" <[email protected]> wrote in message
news:[email protected]..
OEM Grid generating too many core files
Hi all,
Database: Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
OEM (OMS and Agent): 10.2.0.5
OS: Solaris 10 (SPARC 64bit)
I have a weird problem concerning the agent: each and every time I start the agent, the cdump directory fills with hundreds of core files (66 MB each), thus filling up the whole filesystem where the cdump directory resides. I'm not sure why this is happening. Is this a bug? Has anyone experienced this before?
Thanks in advance for your help.
Regards,
Durbanite - South Africa

Hi again,
This is the content of the alert log:
bash-3.00$ tail alert_clpr1.log
ORA-07445: exception encountered: core dump [00000001002258A4] [SIGBUS] [Invalid address alignment] [0x60708090A0B0002] [] []
Fri Jul 17 10:01:11 2009
Errors in file /udd001/app/oracle/admin/clpr/udump/clpr1_ora_27740.trc:
ORA-07445: exception encountered: core dump [00000001002258A4] [SIGBUS] [Invalid address alignment] [0x60708090A0B0002] [] []
bash-3.00$ tail /udd001/app/oracle/admin/clpr/udump/clpr1_ora_27839.trc
FD6C6EF9:00005D6A 265 0 10280 1 0x0000000000000109
F3FED575:00005992 266 0 10280 1 0x000000000000010A
FD6E1007:00005D6B 266 0 10280 1 0x000000000000010A
F40260C6:00005994 267 0 10280 1 0x000000000000010B
F40735E8:00005995 268 0 10280 1 0x000000000000010C
F40C0992:00005997 269 0 10280 1 0x000000000000010D
F40F9C50:00005999 270 0 10280 1 0x000000000000010E
KSTDUMP: End of in-memory trace dump
ssexhd: crashing the process...
Shadow_Core_Dump = PARTIAL
I think I might need to contact Oracle Supprt for this one, I'll start by using the ORA-07445 tool on metalink.
Thanks and regards,
Durbanite -
Too many lines for CO-PA assessment postings in KSB1
Hi Experts,
I created an assessment cycle in KEU1, with one segment, using a PA transfer structure (several cost elements to several value fields), and I'm using a single secondary cost element (category 42) to credit the cost centers.
When I execute the assessment via KEU5, the credits in the cost centers for cost element category 42 generate too many lines (for example, I had 3 postings to 3 primary cost elements and in KSB1 I have 9 lines of "credits" - operation KSPA):
Primary Cost Elements:
4110201004 ==> 3.000,00
4110209009 ==> 2.000,00
4110209013 ==> 1.000,00
Secondary cost element (after settlement through KEU5):
6519999999 ==> 134,36-
198,92-
190,49-
291,54-
548,27-
311,88-
2.659,89-
1.268,02-
396,63-
Do you know how I can summarize these postings to have only 3 (or one) lines for the secondary cost element? I cannot identify the criteria SAP used to generate 9 lines.
Kind Regards
Mayumi

The number of line items crediting the cost centre will be the same as the number of line items required in the CO-PA posting. So the more receivers you have in CO-PA, the more line items are created.
Paolo -
Too Many Parameters error in TopLink-generated class for a 300-column table
Hi Folks,
I have a table with around 300 columns. Once I generate a JPA entity using the Entities from Tables wizard in JDeveloper, it gives me an error that there are too many parameters in the method - that is, in the generated constructor.
If I delete half of the arguments in the constructor it doesn't give me the error, as it no longer exceeds the limit. Would that be fine? Or does TopLink internally need to call that constructor for instrumentation purposes?
The other option is to go with a default constructor, which I'd recommend personally, but I want to make sure it will work.
Any comments? Has anyone run into this problem before?
A.B
Edited by: AAB on Sep 8, 2009 7:32 AM

Hello,
TopLink will only use the no-argument constructor when building the object, then populate attributes using either attribute or property access. So the generated constructor taking arguments is not used at all by TopLink.
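As a sketch of why the argument-taking constructor is dispensable: a JPA provider typically instantiates via the no-arg constructor and then sets each attribute reflectively. The Wide class and build helper below are hypothetical stand-ins for illustration, not TopLink's actual internals:

```java
import java.lang.reflect.Field;

public class NoArgConstruction {

    // Hypothetical entity standing in for the 300-column generated class.
    public static class Wide {
        private String col1;
        private int col2;
        public Wide() { } // the only constructor a provider needs
        public String getCol1() { return col1; }
        public int getCol2() { return col2; }
    }

    /** Builds the object the way a provider typically does:
     *  no-arg constructor, then per-attribute population via reflection. */
    public static Wide build(String v1, int v2) {
        try {
            Wide w = Wide.class.getDeclaredConstructor().newInstance();
            Field f1 = Wide.class.getDeclaredField("col1");
            f1.setAccessible(true);
            f1.set(w, v1);
            Field f2 = Wide.class.getDeclaredField("col2");
            f2.setAccessible(true);
            f2.setInt(w, v2);
            return w;
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        Wide w = build("abc", 42);
        System.out.println(w.getCol1() + " " + w.getCol2()); // abc 42
    }
}
```

So deleting the generated all-arguments constructor (while keeping the default one) should be safe under this construction model.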
Best Regards,
Chris -
[database design] too many users in one instance?!
Good morning everyone!!
I have a question about the number of users of one db instance, the "possible problems" caused by too many users, and the "pros and cons" of this case.
Now, the company I work for is considering centralizing the 150 db servers of 150 sites. The 150 chain stores each have their own db instance.
Here is the structure of the db instance after the centralization:
1. The centralized instance has one user per site (150 sites), for example site001, site002, … site00n, … site150.
2. Each user has its own 250 tables, so the db instance comes to have 150 users and 150*250 tables.
3. Each user has almost the same table schema, but not exactly; there are slight differences among the business rules of the sites.
4. The version of the centralized db instance is Oracle9i.
Theoretically, it seems like there is no problem, but I have not experienced a db instance like this, which has "too many" users: 150 of them.
In terms of every aspect, such as "performance/tuning", "system management", or "something else", I would like to hear what the pros and cons of this structure could be.
Assuming there are possible critical problems caused by too many users, what could be the alternative choice?
I will be waiting to hear your experiences. Every response will be appreciated, even if it is not about what I asked.
Thanks for your interest in advance.
Have a nice day.
Ho from Japan

Have a look at the following AskTom threads:
http://asktom.oracle.com/pls/ask/f?p=4950:8:4685256847630125933::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:31263903816550
http://asktom.oracle.com/pls/ask/f?p=4950:8:4685256847630125933::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:63447102315094 -
Too many instances created when writing to an Oracle database
Hi
I want to write to an Oracle database with the database toolkit,
but I found there are too many instances, or sessions (exceeding the upper limit), created while the program runs. That makes the Oracle server stop responding.
Is there anything I need to modify?
Attachments:
Oracle DB.vi 36 KB

Frankly I have never seen this problem, but then I make a point of never using the database connectivity toolkit. Assuming a Windows platform, Windows implements something called connection pooling: when you "close" a connection the OS inserts it into a pool, so if you ask for another connection it doesn't actually open a new one, it just gives you back a reference to one of the ones you "closed" before.
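The pooling behavior described can be sketched with a stand-in resource class (Conn is hypothetical; real pooling lives in the ODBC/OLE DB layer, not in your code):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PoolSketch {

    // Stand-in for a real DB connection; the point is instance identity.
    public static class Conn { }

    private final Deque<Conn> idle = new ArrayDeque<>();

    /** Hands out an idle connection if one exists, else opens a new one. */
    public Conn open() {
        return idle.isEmpty() ? new Conn() : idle.pop();
    }

    /** "Closing" just parks the connection for reuse. */
    public void close(Conn c) {
        idle.push(c);
    }

    public static void main(String[] args) {
        PoolSketch pool = new PoolSketch();
        Conn first = pool.open();
        pool.close(first);
        Conn second = pool.open();
        System.out.println(first == second); // true: the pool reused it
    }
}
```

The consequence for the original question: sessions only pile up on the server when the program keeps opening without closing, so the fix is usually in how connections are opened and released, not in the pool itself.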
What OS are you running on?
What version of Oracle?
Is the instance of Oracle local or on another computer?
How are you communicating with Oracle (odbc, oledb, etc)?
Mike...
Certified Professional Instructor
Certified LabVIEW Architect
LabVIEW Champion
"... after all, He's not a tame lion..."
Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps -
We have about five devices in our family which have been deactivated. It has been so long since I registered them - years - and I don't recall how I did it. For instance, my daughter's iPod from when she was young doesn't work, and neither does her iPod nano. She can't get any of her music. My Shuffle doesn't work. I am embarrassed that I feel like having Apple products is too complicated for me. I can barely get through all the levels of logging in, let alone make the devices work again. Somewhere along the way I seem to remember getting a message that we had too many devices on our account. Which account? I have given up on this so many times, and now it is years later and I am trying again. How can I get some meaningful customer service so I can use the products and music I bought and paid for?
I think in previous incarnations people would have jumped all over you for your approach, but the Lion operating system has generated decidedly mixed or even negative reviews from users. So no one seems to disagree with you. I think many folks are simply staying with 10.6 and waiting to see if OS 10.8 is better than 10.7. I've mostly avoided upgrading OS's, or when I do I jump over one version completely, because I've waited long enough that I'm two versions behind.
Here are the things that will/would motivate me to update the OS:
* incompatibility with MS-Office (I'm not a MS-Office fan but I do need to use it)
* incompatibility with flash or other tools needed to view modern web content (this might be a while as I am still using several different Macs, the oldest being a 2005 PPC iMac with 10.5.8 and an fairly old/outdated flash, but it seems to still do ok with most web content). Our 2008 Intel Core 2 Duo iMac runs 10.6.8 and can do just about anything we need -- like you, I decided not to go to Lion because it didn't look like much of an improvement.
* incompatibility with various commercial software, such as TurboTax, or with web sites I use to manage financial accounts and such things
We got my daughter a new Macbook Air just before she started college, and Lion plus a new Macbook Air model came out a month later. I am glad now about it actually, I think some of the Lion "issues" would have annoyed her greatly (especially the odd dropping of wireless access points under Lion, everyone is on wireless at colleges now). I'm hoping that the 10.8 OS will be better than Lion. We had (probably still have) a certificate for a free Lion upgrade but we decided not to use it. -
I am having a situation with SSRS 2012 (SP-integrated) report rendered on SP 2013 PerformancePoint Dashboard using linked PerformancePoint (PP) filters.
The report works fine as long as not too many PP filter items are selected at the same time. When gradually selecting more items from the filter, the report keeps updating itself until more than a specific number of filter items are selected - then the report simply does not update itself anymore. This "specific number of filter items", when hit, generates the following error in ULS:
An exception occurred while rendering a Web control. The following diagnostic information might help to determine the cause of this problem: System.UriFormatException: Invalid URI: The hostname could not be parsed.
at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind)
at System.UriBuilder..ctor(String uri)
at Microsoft.PerformancePoint.Scorecards.ServerRendering.ReportViewControl.ReportUrl(SqlReportViewData sqlReportViewData)
at Microsoft.PerformancePoint.Scorecards.ServerRendering.ReportViewControl.RenderSqlReport(TextWriter writer, ReportView sqlReportView)
at Microsoft.PerformancePoint.Scorecards.ServerRendering.ReportViewControl.RenderReportViewControl(HtmlTextWriter writer, ReportView rv) PerformancePoint Services error code 20700.
I already know that the cause of the issue is the length of the query (perhaps RDL or MDX) that the browser is supposed to pass on to the instance of SSAS.
Some people had suggested a workaround that was suitable for older versions or non-integrated SSRS (see here: http://social.msdn.microsoft.com/Forums/sqlserver/en-US/cb6ede72-6ed1-4379-9d3c-847c11b75b32/report-manager-operation-cannot-run-due-to-current-state-of-the-object).
Knowing this, I have already made the suggested changes (adding the suggested lines to SharePoint's web.config for Reporting and to the web.config of the site on which the report is rendered), to no avail, just to make sure.
I have rendered the same report on the same dashboard using SSRS filters and there is no problem; it just works fine. This has to be a bug in PP.
Has anyone had the same problem with SSRS 2012 (SP-integrated) report rendered on SP 2013 PP dashboard using PP filter? Any fixes or workarounds?
Thanks!

Hello everybody.
I confirm the issue in Service Pack 1 Release 2.
A poor workaround is to remove the repeated information from the member keys (in SSAS they can be really long).
The issue seems to be specific to SSRS: Excel Services works well with the same filter.
Sergey Vdovin -
Too many distributed transactions
I am programming an application with JDeveloper 10g and the ADF Framework. It is an application that only queries data (no modifications). I created ViewObjects based on SQL queries and an application module.
The problem is that if I execute the application for some time, I get the following error:
java.sql.SQLException: ORA-02042: too many distributed transactions
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:189)
at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:242)
at oracle.jdbc.ttc7.Oall7.receive(Oall7.java:554)
at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java)
at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteFetch(TTC7Protocol.java:888)
at oracle.jdbc.driver.OracleStatement.doExecuteQuery(OracleStatement.java:2346)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:2660)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:457)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:387)
at oracle.jbo.server.QueryCollection.buildResultSet(QueryCollection.java:665)
As it is a query-only program, I do not need transactions. How can I fix this?
Thanks in advance

Hi,
this is information from metalink:
The ORA-2042 indicates that you should increase the parameter
distributed_transactions.
The ORA-2063 indicates that this must be done at the remote
database.
Explanation
If the distributed transaction table is full on either side of
the database link you get the error ORA-2042:
ORA-02042: "too many distributed transactions"
Cause: the distributed transaction table is full,
because too many distributed transactions are active.
Action: increase the INIT.ORA "distributed_transactions" or
run fewer transactions.
If you are sure you don't have too many concurrent
distributed transactions, this indicates an internal
error and support should be notified.
Instance shutdown/restart would be a workaround.
When the error is generated at the remote database it is
accompanied with an ORA-2063. In this case the parameter
distributed_transactions must be increased at the remote
database.
If there is no ORA-2063 the parameter distributed_transactions
must be increased at the local database. -
ORA-02042: too many distributed transactions
1. I have been working on a portal application for several weeks.
2. Portal is installed in 1 instance and my data is in another instance.
3. I've had no ora-02042 problems in the devt environment set up that way.
4. I've recently migrated the application/pages to a test environment set up that way.
5. I've been working in the test environment for several days with no problems.
6. For some portlets on some pages I'm now getting:
quote:
Failed to parse query
Error:ORA-02042: too many distributed transactions
Error:ORA-02042: too many distributed transactions
ORA-02063: preceding line from
LINK_TO_TEST (WWV-11230) Failed to parse as PACID_SCHEMA -
select user_action.userid, action.name,
user_action.created_date,
user_action.created_by, action.action_id,
'del' del_link from user_action , action
where user_action.action_id =
action.action_id and user_action.userid
LIKE UPPER(:userid) order by USERID
ASC, NAME ASC, CREATED_DATE
ASC (WWV-08300)
7. I cannot find anything about this error in the db log files for either instance.
8. I've increased distributed_transactions to 200 in the portal db and bounced it. Still get the error.
9. No records in dba_2pc_pending or dba_2pc_neighbors in the portal instance.
10. I get the error in various reports and form LOVs at different times. Pages with a lot of portlets seem to be more prone to the error.
Here is a typical LOV error:
quote:
COMBOBOX LOV ERROR:
LOV: "ASIMM_1022.TITLE_LOV"
Parse Message: Parse as pre-set global: "PACID_SCHEMA".
Find Message: LOV is of type DYNAMIC (LOV based on SQL query).
Query: "select code display_column, code return_column from codes where table_id = 'OFFICER_TITLE' order by code"
wwv_parse.parse_as_user: ORA-02042: too many distributed transactions ORA-02063: preceding line from LINK_TO_TEST wwv_parse.parse_as_user: Failed to parse as PACID_SCHEMA - select code display_column, code return_column from codes where table_id = 'OFFICER_TITLE' order by code wwv_security.check_comp_privilege: Insufficient privileges. wwpre_utl.get_path_id: The preference path does not exist: ORACLE.WEBVIEW.PARAMETERS.217_USER_INPUT_ASIMM_5423428
Why are these select statements being interpreted as distributed transactions? Note 1032658.6 suggests I "USE SET TRANSACTION READ ONLY". Is this necessary? If so, how?
What puzzles me is that this setup has been working fine for several days. I don't know of any changes to my environment apart from increasing distributed_transactions.

Hi,
this is information from metalink:
The ORA-2042 indicates that you should increase the parameter
distributed_transactions.
The ORA-2063 indicates that this must be done at the remote
database.
Explanation
If the distributed transaction table is full on either side of
the database link you get the error ORA-2042:
ORA-02042: "too many distributed transactions"
Cause: the distributed transaction table is full,
because too many distributed transactions are active.
Action: increase the INIT.ORA "distributed_transactions" or
run fewer transactions.
If you are sure you don't have too many concurrent
distributed transactions, this indicates an internal
error and support should be notified.
Instance shutdown/restart would be a workaround.
When the error is generated at the remote database it is
accompanied with an ORA-2063. In this case the parameter
distributed_transactions must be increased at the remote
database.
If there is no ORA-2063 the parameter distributed_transactions
must be increased at the local database.