Very High-Level Question on Approach (PI ccBPM or CE7.1?)
All -
I have a very high-level question. I am not current on what would be the appropriate tool for my scenario.
My scenario...
Transitioning business to use new sales order system.
This will be a phased roll-out, not Big Bang.
Company ordering website needs to send orders to one of two systems (new and old).
Assume the order has a flag or some other value indicating which backend application it should be sent to.
Each receiving application has an order-insert web service, which would be in PI 7.1's ESR.
What we "envision" is that there would be some kind of fronting web/enterprise service that would read the "flag", then pass the message to the appropriate web service (and on to the appropriate application).
But I am really not clear how to architect this, or whether any rules "engines" (ccBPM or in CE 7.1) would or should be used.
Your opinions are welcome...
Thank you for your time...
Hi Eric,
It would be more elegant if you could classify these orders logically and provide two user interfaces, one for each type.
But coming back to your question: if you are planning to implement the process in BPM, you could utilize the capabilities of BRM for this purpose. BRM comes with CE 7.1 and works in conjunction with BPM.
[BRM Help|https://www.sdn.sap.com/irj/sdn/nw-rules-management]
Once you want a complete roll-out, you can remove the BRM decision making. Logically this would solve the purpose. ccBPM is not really positioned for this approach; it can be used when you need to interact with multiple systems using different protocols.
Caveat: the use of BPM will restrict your UI to Web Dynpro Java (at least for now). The whole process needs to be built around web services or RFCs.
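For illustration, the decision the BRM rule would encode is essentially content-based routing on the order's flag. A minimal sketch, assuming a simple string flag and made-up target names (none of this comes from the actual systems):

```java
// Hypothetical routing decision: inspect the order's flag and choose
// which backend order-insert service should receive the message.
public class OrderRouter {

    enum Target { NEW_SYSTEM, LEGACY_SYSTEM }

    // "NEW" as the flag value is an assumed convention for this sketch;
    // in BRM this would be a maintainable rule, not hard-coded Java.
    static Target route(String orderFlag) {
        return "NEW".equals(orderFlag) ? Target.NEW_SYSTEM : Target.LEGACY_SYSTEM;
    }

    public static void main(String[] args) {
        System.out.println(route("NEW")); // NEW_SYSTEM
        System.out.println(route("OLD")); // LEGACY_SYSTEM
    }
}
```

Once the phased roll-out completes, retiring the rule reduces to always returning the new target, which is why keeping the decision in one rules layer (rather than in each caller) keeps the cut-over cheap.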
Regards
Bharathwaj
Similar Messages
-
High Level Question - Why create a tag?
We have been using a component architecture for about three years that seems to be very similar to the JSP component architecture. We have UI components (such as a listbox, entryfield, table, tree, etc.) and an associated renderer. In our JSPs, though, we directly call the renderer to render the component (the component delegates to its renderer).
<%= pagebean.getEntryfield().render() %>
I still can't see the great benefit of hiding that in a tag, especially considering our developers and page designers are the same people, and they are very good at Java. I wanted to simplify the developer APIs as much as possible (avoiding XML, etc.). That's why we've stuck with the above API versus using tags, not to mention debugging Java is much easier. I am really excited about JSF, hoping we can move to a standard API versus our proprietary one. What do you all think?
Dave
Interesting question ... and I hope the answer is equally illuminating.
JavaServer Faces has many aims, but an important aim relevant to this question is broadening the attractiveness of the Java platform to page authors and others who are not Java developers, and would find the syntax of your scriptlet to be totally opaque and not understandable. Further, what you haven't shown is how you configure the characteristics of your component (probably <jsp:set-property> or scriptlet expressions or something?).
One of the mechanisms to improve this attractiveness will be to have high quality tools support for JavaServer Faces components -- not just the standard ones, but anyone's third party library. Picture the user who wants to use, say, a Calendar component, and your page author is using a GUI. What the user wants to be able to do is drag a Calendar off a template, drop it into their page, pop open a properties window, and configure all the detailed settings -- never seeing a line of code. The tag class (and the associated metadata in faces-config.xml) are what makes it possible for the tool to know what properties go in the property sheet.
In your environment, where the page author is also a Java developer, you still get a little benefit (configuring components through tag attributes is still more concise than <jsp:setProperty> or scriptlets). But there are many many many more page authors in the world who don't know Java, and don't want to know Java. JavaServer Faces is after those folks too.
Craig McClanahan
PS: JavaServer Faces components can also be accessed at the Java API level, so you can use scriptlets to do so in your pages if you really want to. -
High Level Thread Implementation Questions
Hi,
Before I take the plunge and program my software using threads, I have a few high-level questions.
I plan on having a simulation class that instantiates software agents, each with different parameters. There is an agent class, with constructor, methods, etc. Each agent has a sequence to go through. Once completed, the iteration number is increased and the sequence is repeated. That's simple enough to do.
The question is, is it worth executing each agent on a different thread?
If there are around 500 - 1000 lines of code (crude measurement, I know), how many can I expect to thread efficiently?
One parameter allows an agent to execute n cycles for each global iteration. (i.e. in one iteration, agent A runs once, agent B runs 5 times). Could this be a problem? Should this be controlled outside the agent, or inside it?
Can I write the code without having to worry about threading, or do I have to design the agent code with threading in mind?
Will they really run in parallel? It is important that there is no bias to the execution order. I can solve this messily without using threads by randomising the execution order - but that is a messy work around - and why I'm looking at threads.
Can threaded objects interact easily with non threaded one when execution order is important?
Are there any other points that I should consider?
Thanks in advance - any information before I enter this uncharted territory will be truly appreciated!!
I think you are better off running this all in a single thread.
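A minimal sketch of that single-threaded design, combining the randomized execution order from the question with a per-agent cycle count (all names here are illustrative, not from the actual simulation):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Single-threaded simulation loop: shuffle the agent order each global
// iteration (removing execution-order bias without threads) and honor a
// per-agent cycle count, e.g. agent B running 5 times per iteration.
public class AgentSimulation {

    interface Agent {
        void step();
    }

    static void runIteration(List<Agent> agents,
                             Map<Agent, Integer> cyclesPerIteration,
                             Random rnd) {
        List<Agent> order = new ArrayList<>(agents);
        Collections.shuffle(order, rnd); // unbiased execution order
        for (Agent a : order) {
            int cycles = cyclesPerIteration.getOrDefault(a, 1);
            for (int i = 0; i < cycles; i++) {
                a.step(); // the agent's whole sequence for one cycle
            }
        }
    }

    public static void main(String[] args) {
        final int[] count = new int[2];
        Agent a = () -> count[0]++;
        Agent b = () -> count[1]++;
        Map<Agent, Integer> cycles = new HashMap<>();
        cycles.put(b, 5); // agent B runs 5 times per global iteration
        runIteration(Arrays.asList(a, b), cycles, new Random());
        System.out.println(count[0] + " " + count[1]); // 1 5
    }
}
```

Since everything runs on one thread, the agent code needs no synchronization, and interaction between agents (and with non-agent objects) is just ordinary method calls in a well-defined order.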
Threads make no guarantee as to scheduling. Threads do not increase efficiency (unless your agents block on i/o, or sleep). Threads come with an overhead cost.
Threads don't guarantee no bias to execution order.
Threads require synchronization to ensure safe interaction between each other. This is a bit of extra work, and can be a bitch if you're not familiar with it.
Yes, threads run in parallel. If you have multiple processors then they can truly run in parallel, otherwise they run in time slices. -
Where can I find various high level examples of workflows being used
I am about to start a project with TCS 3.5 and have been participating in the Adobe webinars to help learn components and specific techniques, but what I am lacking is an understanding of various workflows I can model my project after or take bits from. Why start with FrameMaker in this workflow versus RoboHelp or even Word? Questions like this I think come from experience with the process, and I am thinking that what I am getting myself into is a chess game with all these pieces, and I don't want to paint myself into a corner by traveling down one route. I have seen this graphic:
And this one:
And this one:
But they are too generic and do not contain enough information to really understand the decision-making process one must go through on various projects.
Can we have a series of webinars made, all with the underlying theme of defining a working process or workflow, by having guests describe how they have used or are using this suite in real life on their own projects? One that might include a graphic showing the routes taken through the suite, with reasons why?
My project hopes to make a single-source internal site that will tie together various 3D portable industrial coordinate metrology systems (hardware and software). It would be used as a dispersal site for help, communications between users and SMEs, OEM information, QA requirements, established processes, scripting snippet downloads, statistics, and training (including SOJT). Portable industrial metrology has 8 different softwares that are used and right now about 8 different instruments. These include laser trackers and radars, articulated arms, scanners, and structured white and blue light, to name a few. The softwares include Spatial Analyzer, Veriserf, CompIT, eMscon, and AXYZ, to name a few there as well. I want to be able to participate in and add content to an internal SharePoint site, push content to users for stand-alone workstations, ePub, capture knowledge leaving the company through attrition, develop easy graphic-rich job aid sheets, and aid in evaluations of emergent software and hardware. I would also like to leave the option open to use the finished product as a Rosetta-stone-like translator between the software packages; "doing this here is the equivalent of doing this in these other software packages", for example.
PDF is definitely a format I want to include, to collaborate with other divisions and SMEs for one reason, but also for the ease of including 3D interactive target models within it, and for portability. I plan on being able to provide individual PDFs that are very specific in their topics and to also use them to disperse user guides, cheat sheets or job aids... something the user may want to laminate on their own and keep with them for reference, printed out. Discussion in these sheets would be drastically reduced to only the elements, relying heavily on bullet points or steps, useful graphs, charts and tables... and of course illustrative images. I am thinking that these should be downloadable buttons to print on each topic section, not in a general appendix or such.
They would hopefully be limited to one page, double sided 8x10.
The cheat sheet would have a simplistic flow chart of how or where this specific topic fits in the bigger picture,
The basic steps,
Illustrations, equipment, setup
Software settings for various situations in a table or chart,
Typical result graph to judge with,
Applicable QA, FAA regulation settings or concerns,
Troubleshooting table,
Topic SME contact info
On the back, a screen shot infographic of software process
The trouble here is that I have read that FM sometimes has a problem successfully transferring highly structured or formatted material to RoboHelp. Does this then mean that I would take it from FM straight to PDF?
Our OEM material is very high-level stuff... basically for engineers and not shop-floor users... but that is not to say they don't have some good material that could be useful. Our internal content is spread out across many different divisions and continents, with various ways of saying the same thing. This leads QA to interpret the information differently depending on where the systems are put to work. We also have FAA requirements that need to be addressed and reminded to the user.
Our company is also starting to see an exodus of the most knowledgeable users through retirement. Capturing the knowledge and soft-skill packages they have developed working here for 20-30 years is something I am really struggling with. I have only come up with two ideas so far:
Internal User Web based Forum
Interviews (some SMEs do not want to make the effort of transferring knowledge by participating in anything if it requires an effort they don't see as benefiting themselves), to get video, audio or transcription records -
Very high memory usage with Yahoo Mail
After using Yahoo Mail for an hour or so my memory usage increases to a very high level.
Just now, after reading and deleting about 50 e-mails (newsletters etc.) I noticed Firefox 17 running slowly and checked the memory usage in Windows Task Manager (I am using XP) and it was 1.2 Gb. My older laptop only has 2 Gb of RAM. Yahoo Mail was the only thing open at the time.
I never notice this problem with Gmail which I mainly use. However I use Yahoo Mail for quite a few newsletters etc. that are less important and which I only check once a week or so.
I found the following bug report about 3 years old which almost exactly describes my problem.
https://bugzilla.mozilla.org/show_bug.cgi?id=506771
But this report involves a much earlier Firefox version, and at the end it seems to say that the problem was fixed. However it well describes my current issue with Firefox 17, especially the continual increase in memory while using the up/down arrow keys to scroll through Yahoo e-mails.
Is it normal to have to shut down and reopen Firefox every hour or so to clear out the memory? For some reason I only notice this when using Yahoo Mail. After using many other sites and having multiple tabs open for several hours I rarely reach that kind of memory usage. About the highest I've seen with other sites after a couple of hours is 600 MB, which is roughly when I start to notice slower response times.
See also:
*https://support.mozilla.org/kb/firefox-uses-too-much-memory-ram
Start Firefox in <u>[[Safe Mode|Safe Mode]]</u> to check if one of the extensions (Firefox/Tools > Add-ons > Extensions) or if hardware acceleration is causing the problem (switch to the DEFAULT theme: Firefox/Tools > Add-ons > Appearance).
*Do not click the Reset button on the Safe mode start window or otherwise make changes.
*https://support.mozilla.org/kb/Safe+Mode
*https://support.mozilla.org/kb/Troubleshooting+extensions+and+themes -
High level OBIEE diagram?
Hi, I'm wondering if anyone has a very high-level OBIEE diagram, suitable for giving new users a 10,000-foot view of the OBIEE system. I.e. something that shows the browser talking to Presentation Services talking to the BI Server talking to databases.
Does anyone have anything I can blatantly steal... ummm, I mean reutilize?
Thx,
Scott
Hi Scott,
There are tons of images out there (not literally in tons, but you know what I mean).
Follow this link: ( https://www.google.com/search?hl=en&sugexp=les%3B&tok=3m7SX0LbsZH2wJ71o09isg&cp=26&gs_id=3&xhr=t&q=obiee+architecture+diagram&bav=on.2,or.r_gc.r_pw.r_qf.&biw=1206&bih=647&wrapid=tljp134512990497400&um=1&ie=UTF-8&tbm=isch&source=og&sa=N&tab=wi&ei=sg0tULUC4cvRAdmPgMgC )
Feel free to re-utilize them as you wish :) -
Does anyone have a sample implementation plan that can be shared? High level?
You will probably need to inquire with a VMware consultant to get this kind of information. VMware depends on these people to make sure they keep the reputation of the software at a very high level.
They will have access to various free tools to help large and small scale deployments. Tools like VMware Health Check Script and the ESX deployment tool.
If you find this information useful, please award points for
"correct"
or "helpful".
Wes Hinshaw
www.myvmland.com -
Basic XML Publisher Question: How to access tags in the higher levels?
Hi All,
We have a basic question in XML Publisher.
We have a xml hierarchy like below:
<CD_CATALOG>
<CATALOG>
<CAT_NAME> CATALOG 1</CAT_NAME>
<CD>
<TITLE>TITLE1 </TITLE>
<ARTIST>ARTIST1 </ARTIST>
</CD>
<CD>
<TITLE> TITLE2</TITLE>
<ARTIST>ARTIST2 </ARTIST>
</CD>
</CATALOG>
<CATALOG>
<CAT_NAME> CATALOG 2</CAT_NAME>
<CD>
<TITLE>TITLE3 </TITLE>
<ARTIST>ARTIST3 </ARTIST>
</CD>
<CD>
<TITLE> TITLE4</TITLE>
<ARTIST>ARTIST4 </ARTIST>
</CD>
</CATALOG>
</CD_CATALOG>
We need to create a report like below:
CATALOG_NAME CD_TITLE CD_ARTIST
CATALOG 1 TITLE1 ARTIST1
CATALOG 1 TITLE2 ARTIST2
CATALOG 2 TITLE3 ARTIST3
CATALOG 2 TITLE4 ARTIST4
So we have to loop at the level of <CD> using for-each CD. But when we are inside this loop, we cannot access the value of CAT_NAME which is at a higher level.
How can we solve this?
Right now, we are using the work-around of set_variable and get_Variable. We are setting the value of CAT_NAME inside an outer loop, and using it inside the inner loop using get_variable.
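For what it's worth, the parent level is reachable with a relative XPath from each CD node. Here is a standalone sketch of that lookup using the JDK's XPath API on the sample document above — outside XML Publisher, just to show the path semantics:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class CatalogXPath {

    // Evaluate CAT_NAME relative to a CD node via the parent axis,
    // mirroring the idea of a relative path inside a for-each over CD.
    static String parentCatName(Node cd) throws Exception {
        XPath xp = XPathFactory.newInstance().newXPath();
        return xp.evaluate("../CAT_NAME", cd);
    }

    static Document parse(String xml) throws Exception {
        return DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
    }

    public static void main(String[] args) throws Exception {
        String xml = "<CD_CATALOG><CATALOG><CAT_NAME>CATALOG 1</CAT_NAME>"
                + "<CD><TITLE>TITLE1</TITLE><ARTIST>ARTIST1</ARTIST></CD>"
                + "<CD><TITLE>TITLE2</TITLE><ARTIST>ARTIST2</ARTIST></CD>"
                + "</CATALOG></CD_CATALOG>";
        Document doc = parse(xml);
        XPath xp = XPathFactory.newInstance().newXPath();
        NodeList cds = (NodeList) xp.evaluate("//CD", doc, XPathConstants.NODESET);
        for (int i = 0; i < cds.getLength(); i++) {
            Node cd = cds.item(i);
            // each row pairs the ancestor's CAT_NAME with the CD's own fields
            System.out.println(parentCatName(cd) + " "
                    + xp.evaluate("TITLE", cd) + " "
                    + xp.evaluate("ARTIST", cd));
        }
    }
}
```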
Is this the proper way to do this, or are there better ways? We are running into trouble when the data is inside tables.
You can use the relative path
<?../CAT_NAME?>
Copy-paste this into your template:
<?for-each:CD?> <?../CAT_NAME?> <?TITLE?> <?ARTIST?> <?end for-each?> -
High-Level JTS/TopLink design question
I've gone through the "using JTS with TopLink" docs, and it mostly makes sense. However, I still don't understand how TopLink "knows" when I call acquireUnitOfWork() whether or not I'm participating in a distributed 2PC transaction.
Said another way:
Let's say I've got an application based on TopLink (registering appropriate JTS stuff) that exposes an API that can be accessed remotely (RMI, SOAP, whatever).
And, I've got another, separate application using a different persistence-layer technology (also supporting JTS) that also has an API.
Now, I create a business method that uses the APIs from both of these applications, and I want them to participate in a single, distributed transaction.
At a high level (source code is unnecessary), how does that work?
Would the API need to support an ability to specify a TransactionContext, or is this all handled behind the scenes by the two systems registering with the Transaction Service?
If this is all handled through registration, how do these two systems know that these specific calls are all part of the same XA transaction?
Nate,
TopLink participates in JTA/JTS transactions but does not control them. When you configure TopLink to use the JTA/JTS services of the host application server, you are deferring TX control to the J2EE container. TopLink will in this case register each acquired UnitOfWork in the current active TX from the container. The container will also ensure that the JDBC connection provided to TopLink is bound by the active TX.
In order to get 2PC you must register multiple resources into the same JTA TX. The TX processing during commit will then make the appropriate callbacks to the underlying data source, as well as the necessary callbacks to listeners such as TopLink to have its SQL issued against the database.
In short: The J2EE container manages the 2PC TX and TopLink is just a participant.
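To make the registration idea concrete, here is a toy model — emphatically not the real JTA API — of a transaction context bound to the current thread, with each persistence layer enlisting itself. This is how the coordinator "knows" both belong to the same 2PC transaction without any TransactionContext traveling through the business APIs:

```java
import java.util.ArrayList;
import java.util.List;

// Hedged toy model of transaction registration -- NOT the real JTA API.
// Both persistence layers enlist with whatever transaction is associated
// with the current thread, so no context is passed through business calls.
public class TxContextSketch {

    // Stand-in for the container's per-thread transaction association.
    static final ThreadLocal<List<String>> currentTx =
            ThreadLocal.withInitial(ArrayList::new);

    // Each JTS-aware layer (TopLink, the other persistence technology)
    // would call something like this when its work begins.
    static void enlist(String resource) {
        currentTx.get().add(resource);
    }

    public static void main(String[] args) {
        // A business method spanning both systems: one thread, one TX.
        enlist("TopLink UnitOfWork");
        enlist("OtherPersistenceLayer");
        // At commit, the coordinator drives 2PC callbacks to every
        // resource enlisted in this list.
        System.out.println(currentTx.get());
    }
}
```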
Doug Clarke -
APO DP: Disaggregation to product&plant level from higher levels.
Hi.
We do demand planning on groups of products and for country/region in general; we have around 48,000 CVCs in our current setup. It works very well.
A new situation has arisen where we need to have the forecast split down to product and plant level.
As is we simply don't have the information at this level of granularity.
I don't see how we can add, for instance, product to our setup; we have around 20,000 products, so the number of CVCs in DP would become massive if we did this.
I was thinking that perhaps something could be done by exporting the relevant key figures to a new DP setup with fewer characteristics (to keep the number of CVC's down) via some infocubes, perhaps some disaggregation could be done via some tables and the BW update rules. This still leaves the issue of how to get the figures properly disaggregated to plant and product though.
Does anyone have experience with getting the figures split to lower levels from DP when you're planning at a higher level?
Simon,
One approach, as you mentioned, can be creating a Z table in which you set up disaggregation proportions from product-group level to product level or product/location level:
Product Group X 100 Product A@loc1 10
Product B@loc1 90
Download your planning area data into infocube C, and then use BW routines to convert the data from group level in infocube C to the lower level, referring to the Z table, into another infocube.
SAP also provides standard functionality for splitting the aggregate Demand Plan to a detailed-level SNP plan, through functionality like location split or product split.
Essentially you will be using the same concept in your BW solution, or you may also want to consider releasing your DP to the SNP planning area as a solution for disaggregating data to a lower level.
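The Z-table disaggregation step amounts to multiplying the group-level figure by each product/location proportion. A small sketch of that arithmetic (the table contents are assumed, matching the example proportions above):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the Z-table split: proportions per product@location are
// applied to the group-level quantity.
public class Disaggregation {

    // pct maps each detail key to its share of the group total (sums to 100)
    static Map<String, Double> split(double groupQty, Map<String, Double> pct) {
        Map<String, Double> out = new LinkedHashMap<>();
        for (Map.Entry<String, Double> e : pct.entrySet()) {
            out.put(e.getKey(), groupQty * e.getValue() / 100.0);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Double> pct = new LinkedHashMap<>();
        pct.put("Product A@loc1", 10.0);
        pct.put("Product B@loc1", 90.0);
        // Product Group X forecast of 100 units split to detail level
        System.out.println(split(100.0, pct));
        // {Product A@loc1=10.0, Product B@loc1=90.0}
    }
}
```

In the BW routine this multiplication would run per key figure and period, with the proportions read from the Z table.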
Regards,
Manish -
Hi all,
I want answers to my two questions, very urgent... please tell me:
1) Can we use Canvas with high-level API components like Form, TextField, List, etc.? 2) Can CustomItem work with Canvas?
Thanks in advance
Regards / sourab
Hi,
Is it possible to design a session window (like the one we have in Yahoo Messenger) using Canvas? If so, please explain it for me.
Thanks / sourab -
RFC: Proposing a high level iteration facility based on Collections
I am requesting for comments for an experimental package I developed providing a high level facility for iteration on Java 2 Collections.
You can browse the javadoc at http://www.cacs.louisiana.edu/~cxg9789/javautils/ and the code is available for download from http://www.cacs.louisiana.edu/~cxg9789.
Basically, the package provides an interface Task that has a single method job(), which is called for every element in a given collection. There are some static methods for using this kind of scheme and iterating over collections. An example would be:
Iteration.loop(collection, new Task() {
    public void job(Object o) {
        // do something on o here
    }
});
Now you may wonder what the use is of going to this much trouble when I could just get an iterator and do the same thing. Well, creating a class that represents the whole iteration opens a number of new possibilities. You can now have methods and variables exclusive to the specific iteration and reuse it. You can even subclass it for variants. This proved very useful in my application, especially when I developed the StringTask class that is available in the same package.
Nevertheless, you can see it for yourself that we've got rid of the iterator and the condition checking that appears in conventional loop constructs. For details you can look at the Iteration at http://www.cacs.louisiana.edu/~cxg9789/javautils/edu/ull/cgunay/utils/Iteration.html and StringTask at http://www.cacs.louisiana.edu/~cxg9789/javautils/edu/ull/cgunay/utils/StringTask.html
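The described scheme can be reproduced in a few lines. This is a hedged sketch of the Task/loop shape, not the actual package code:

```java
import java.util.Arrays;
import java.util.Collection;

// Minimal self-contained sketch of the scheme described above: a Task
// whose job() runs for each element, driven by a static loop helper.
public class IterationSketch {

    // The callback interface: job() is invoked once per element.
    interface Task {
        void job(Object o);
    }

    // Static helper that drives the iteration, hiding the loop machinery.
    static void loop(Collection<?> c, Task t) {
        for (Object o : c) {
            t.job(o);
        }
    }

    public static void main(String[] args) {
        final StringBuilder sb = new StringBuilder();
        // The Task carries its own state (sb) and could be reused or subclassed.
        Task join = new Task() {
            public void job(Object o) {
                sb.append(o).append(' ');
            }
        };
        loop(Arrays.asList("a", "b", "c"), join);
        System.out.println(sb.toString().trim()); // a b c
    }
}
```

Because the Task is a named object rather than an inline loop, it can hold fields, be subclassed for variants, and be unit-tested on its own, which is the point the post is making.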
I was wondering if you Java developers would find such a scheme useful. Thanks for your interest.
Heh... now I need to remember back...
No I don't, the internet comes to the rescue :-) I was wrong, apply is the simple "function apply" function; the list function of interest was (is) called map.
http://www.cse.unsw.edu.au/~paull/cs1011/hof.html
This was very helpful. Now I know what you're saying. Actually this may very well be the hidden influence that lead to this system. The map operation also exists in Lisp and Scheme. I remember really liking it when I first learned about it.
My approach isn't exactly the same as neither of map, fold or filter. But I think I can create subclasses of Task which act the same way with these. I will try to do this in the near future.
Just as a comment on the design, I would have maneuvered the list of getSomeResult() into an object that knows how to render itself as a String. That may well be using your Iteration feature over the internal list, or another feature as the developer sees fit.
This point is well taken. Actually my program uses this kind of approach in many places; however, I wanted to give a less confusing example here.
Your iteration is a perfectly valid pattern to iterate over a collection, and one that could be applied in many places in my code. I'm unsure I will migrate to use it, because it tends towards a large number of small objects to perform its function... which sometimes can be a simplification, but in this case can obscure multi-threading issues... spreading a loop over a collection across multiple classes makes it less obvious what is happening to the contents, or what synchronization is required, what locks have been acquired, or what concurrent modifications are possible.
On the other hand, it might be more secure to have operations on collections in one place, as an Iteration class, to keep them together. Maybe you already mentioned it in your message. I can understand your concern about using the system, though.
To keep this information in one place, you could use an anonymous inner class, but then you have lost the reusability and succinctness of the iteration, which are two of its largest benefits (being a high-level function using centralised, tested code being the third, and probably the largest).
I started using small inner classes very extensively. Maybe this can alleviate the problem, since they're not anonymous and can be reused. However, there is still a problem using (subclassing) them from outside of the class. I found a way to do this, too. It only works in special situations, though.
Assume you have an inner class:
public class Outer {
    class Inner { ... }
}
You can extend this inner class if you have another class extending Outer:
class NewOuter extends Outer {
    class Inner extends Outer.Inner { ... }
}
Thinking about it, your iteration is at a higher level than foreach... and would benefit from using it if it were ever supported by the JVM. They are slightly orthogonal approaches to looping abstraction, foreach being syntactic and your pattern being heuristic.
We're in tune here; I'd be interested to use the foreach operator and the Iterable interface as primitives in my system if they are ever provided in Java. Currently, they are not offering me anything extra, since the Collection interface provides me with what I need. -
XML select query causing very high CPU usage.
Hi All,
In our Oracle 10.2.0.4 two-node RAC we are facing very high CPU usage, and all of the top CPU-consuming processes are executing the SQL below; these statements are also waiting on some gc wait events, as shown below.
SELECT B.PACKET_ID FROM CM_PACKET_ALT_KEY B, CM_ALT_KEY_TYPE C, TABLE(XMLSEQUENCE ( EXTRACT (:B1 , '/AlternateKeys/AlternateKey') )) T
WHERE B.ALT_KEY_TYPE_ID = C.ALT_KEY_TYPE_ID AND C.ALT_KEY_TYPE_NAME = EXTRACTVALUE (VALUE (T), '/AlternateKey/@keyType')
AND B.ALT_KEY_VALUE = EXTRACTVALUE (VALUE (T), '/AlternateKey')
AND NVL (B.CHILD_BROKER_CODE, '6209870F57C254D6E04400306E4A78B0') =
NVL (EXTRACTVALUE (VALUE (T), '/AlternateKey/@broker'), '6209870F57C254D6E04400306E4A78B0')
SQL> select sid,event,state from gv$session where state='WAITING' and event not like '%SQL*Net%';
SID EVENT STATE
66 jobq slave wait WAITING
124 gc buffer busy WAITING
143 gc buffer busy WAITING
147 db file sequential read WAITING
222 Streams AQ: qmn slave idle wait WAITING
266 gc buffer busy WAITING
280 gc buffer busy WAITING
314 gc cr request WAITING
317 gc buffer busy WAITING
392 gc buffer busy WAITING
428 gc buffer busy WAITING
471 gc buffer busy WAITING
518 Streams AQ: waiting for time management or cleanup tasks WAITING
524 Streams AQ: qmn coordinator idle wait WAITING
527 rdbms ipc message WAITING
528 rdbms ipc message WAITING
532 rdbms ipc message WAITING
537 rdbms ipc message WAITING
538 rdbms ipc message WAITING
539 rdbms ipc message WAITING
540 rdbms ipc message WAITING
541 smon timer WAITING
542 rdbms ipc message WAITING
543 rdbms ipc message WAITING
544 rdbms ipc message WAITING
545 rdbms ipc message WAITING
546 rdbms ipc message WAITING
547 gcs remote message WAITING
548 gcs remote message WAITING
549 gcs remote message WAITING
550 gcs remote message WAITING
551 ges remote message WAITING
552 rdbms ipc message WAITING
553 rdbms ipc message WAITING
554 DIAG idle wait WAITING
555 pmon timer WAITING
79 jobq slave wait WAITING
117 gc buffer busy WAITING
163 PX Deq: Execute Reply WAITING
205 db file parallel read WAITING
247 gc current request WAITING
279 jobq slave wait WAITING
319 LNS ASYNC end of log WAITING
343 jobq slave wait WAITING
348 direct path read WAITING
372 db file scattered read WAITING
475 jobq slave wait WAITING
494 gc cr request WAITING
516 Streams AQ: qmn slave idle wait WAITING
518 Streams AQ: waiting for time management or cleanup tasks WAITING
523 Streams AQ: qmn coordinator idle wait WAITING
528 rdbms ipc message WAITING
529 rdbms ipc message WAITING
530 Streams AQ: waiting for messages in the queue WAITING
532 rdbms ipc message WAITING
537 rdbms ipc message WAITING
538 rdbms ipc message WAITING
539 rdbms ipc message WAITING
540 rdbms ipc message WAITING
541 smon timer WAITING
542 rdbms ipc message WAITING
543 rdbms ipc message WAITING
544 rdbms ipc message WAITING
545 rdbms ipc message WAITING
546 rdbms ipc message WAITING
547 gcs remote message WAITING
548 gcs remote message WAITING
549 gcs remote message WAITING
550 gcs remote message WAITING
551 ges remote message WAITING
552 rdbms ipc message WAITING
553 rdbms ipc message WAITING
554 DIAG idle wait WAITING
555 pmon timer WAITING
I am not at all able to understand what this SQL is... I think it's related to some XML datatype.
Also, I am not able to generate an execution plan for this SQL using explain plan; I get an error (ORA-00932: inconsistent datatypes: expected - got -).
Please help me with this issue...
How can I generate an execution plan?
Does this type of XML-based query cause high gc wait events and buffer busy wait events?
How can I tune this query?
How can I find out whether this is the only query causing high CPU usage?
Our servers have 64 GB RAM and 16 CPUs.
OS is Solaris 5.10 with UDP as the protocol for the interconnect.
-Yasser
I found some more XML queries, as shown below.
SELECT XMLELEMENT("Resource", XMLATTRIBUTES(RAWTOHEX(RMR.RESOURCE_ID) AS "resourceID", RMO.OWNER_CODE AS "ownerCode", RMR.MIME_TYPE AS "mimeType",RMR.FILE_SIZE AS "fileSize", RMR.RESOURCE_STATUS AS "status"), (SELECT XMLAGG(XMLELEMENT("ResourceLocation", XMLATTRIBUTES(RAWTOHEX(RMRP.REPOSITORY_ID) AS "repositoryID", RAWTOHEX(DIRECTORY_ID) AS "directoryID", RESOURCE_STATE AS "state", RMRO.RETRIEVAL_SEQ AS "sequence"), XMLFOREST(FULL_PATH AS "RemotePath"))ORDER BY RMRO.RETRIEVAL_SEQ) FROM RM_RESOURCE_PATH RMRP, RM_RETRIEVAL_ORDER RMRO, RM_LOCATION RML WHERE RMRP.RESOURCE_ID = RMR.RESOURCE_ID AND RMRP.REPOSITORY_ID = RMRO.REPOSITORY_ID AND RMRO.LOCATION_ID = RML.LOCATION_ID AND RML.LOCATION_CODE = :B2 ) AS "Locations") FROM RM_RESOURCE RMR, RM_OWNER RMO WHERE RMR.OWNER_ID = RMO.OWNER_ID AND RMR.RESOURCE_ID = HEXTORAW(:B1 )
SELECT XMLELEMENT ( "Resources", XMLAGG(XMLELEMENT ( "Resource", XMLATTRIBUTES (B.RESOURCE_ID AS "id"), XMLELEMENT ("ContentType", C.CONTENT_TYPE_CODE), XMLELEMENT ("TextExtractStatus", B.TEXT_EXTRACTED_STATUS), XMLELEMENT ("MimeType", B.MIME_TYPE), XMLELEMENT ("NumberPages", TO_CHAR (B.NUM_PAGES)), XMLELEMENT ("FileSize", TO_CHAR (B.FILE_SIZE)), XMLELEMENT ("Status", B.STATUS), XMLELEMENT ("ContentFormat", D.CONTENT_FORMAT_CODE), G.ALTKEY )) ) FROM CM_PACKET A, CM_RESOURCE B, CM_REF_CONTENT_TYPE C, CM_REF_CONTENT_FORMAT D, ( SELECT XMLELEMENT ( "AlternateKeys", XMLAGG(XMLELEMENT ( "AlternateKey", XMLATTRIBUTES ( H.ALT_KEY_TYPE_NAME AS "keyType", E.CHILD_BROKER_CODE AS "broker", E.VERSION AS "version" ), E.ALT_KEY_VALUE )) ) ALTKEY, E.RESOURCE_ID RES_ID FROM CM_RESOURCE_ALT_KEY E, CM_RESOURCE F, CM_ALT_KEY_TYPE H WHERE E.RESOURCE_ID = F.RESOURCE_ID(+) AND F.PACKET_ID = HEXTORAW (:B1 ) AN
D E.ALT_KEY_TYPE_ID = H.ALT_KEY_TYPE_ID GROUP BY E.RESOURCE_ID) G WHERE A.PACKET_ID = HEXTORAW (:B1
SELECT XMLELEMENT ("Tagging", XMLAGG (GROUPEDCAT)) FROM ( SELECT XMLELEMENT ( "TaggingCategory", XMLATTRIBUTES (CATEGORY1 AS "categoryType"), XMLAGG (LISTVALUES) ) GROUPEDCAT FROM (SELECT EXTRACTVALUE ( VALUE (T), '/TaggingCategory/@categoryType' ) CATEGORY1, XMLCONCAT(EXTRACT ( VALUE (T), '/TaggingCategory/TaggingValue' )) LISTVALUES FROM TABLE(XMLSEQUENCE(EXTRACT ( :B1 , '/Tagging/TaggingCategory' ))) T) GROUP BY CATEGORY1)
SELECT XMLCONCAT ( :B2 , DI_CONTENT_PKG.GET_ENUM_TAGGING_FN (:B1 ) ) FROM DUAL
SELECT XMLCONCAT (:B2 , :B1 ) FROM DUAL
SELECT * FROM EQ_RAW_TAG_ERROR A WHERE TAG_LIST_ID = :B2 AND EXTRACTVALUE (A.RAW_TAG_XML, '/TaggingValues/TaggingValue/Value' ) = :B1 AND A.STATUS = 'NR'
SELECT RAWTOHEX (S.PACKET_ID) AS PACKET_ID, PS.PACKET_STATUS_DESC, S.LAST_UPDATE AS LAST_UPDATE, S.USER_ID, S.USER_COMMENT, MAX (T.ALT_KEY_VALUE) AS ALTKEY, 'Y' AS IS_PACKET FROM EQ_PACKET S, CM_PACKET_ALT_KEY T, CM_REF_PACKET_STATUS PS WHERE S.STATUS_ID = PS.PACKET_STATUS_ID AND S.PACKET_ID = T.PACKET_ID AND NOT EXISTS (SELECT 1 FROM CM_RESOURCE RES WHERE RES.PACKET_ID = S.PACKET_ID AND EXISTS (SELECT 1 FROM CM_REF_CONTENT_FORMAT CF WHERE CF.CONTENT_FORMAT_ID = RES.CONTENT_FORMAT AND CF.CONTENT_FORMAT_CODE = 'I_FILE')) GROUP BY RAWTOHEX (S.PACKET_ID), PS.PACKET_STATUS_DESC, S.LAST_UPDATE, S.USER_ID, S.USER_COMMENT UNION SELECT RAWTOHEX (A.FATAL_ERROR_ID) AS PACKET_ID, C.PACKET_STATUS_DESC, A.OCCURRENCE_DATE AS LAST_UPDATE, '' AS USER_ID, '' AS USER_COMMENT, RAWTOHEX (A.FATAL_ERROR_ID) AS ALTKEY, 'N' AS IS_PACKET FROM EQ_FATAL_ERROR A, EQ_ERROR_MSG B, CM_REF_PACKET_STATUS C, EQ_SEVERITY D WHERE A.PACKET_ID IS NULL AND A.STATUS = 'NR' AND A.ERROR_MSG_ID = B.ERROR_MSG_ID AND B.SEVERITY_I
SELECT /*+ INDEX(e) INDEX(a) INDEX(c)*/ XMLAGG(XMLELEMENT ( "TaggingCategory", XMLATTRIBUTES ( G.TAG_CATEGORY_CODE AS "categoryType" ), XMLELEMENT ("TaggingValue", XMLATTRIBUTES (C.IS_PRIMARY AS "primary", H.ORIGIN_CODE AS "origin"), XMLAGG(XMLELEMENT ( "Value", XMLATTRIBUTES ( F.TAG_LIST_CODE AS "listType" ), E.TAG_VALUE )) ) )) FROM TABLE (CAST (:B1 AS T_TAG_MAP_HIERARCHY_TAB)) A, TABLE (CAST (:B2 AS T_ENUM_TAG_TAB)) C, REM_TAG_VALUE E, REM_TAG_LIST F, REM_TAG_CATEGORY G, CM_ORIGIN H WHERE E.TAG_VALUE_ID = C.TAG_VALUE_ID AND F.TAG_LIST_ID = E.TAG_LIST_ID AND G.TAGGING_CATEGORY_ID = F.TAGGING_CATEGORY_ID AND H.ORIGIN_ID = C.ORIGIN_ID AND C.ENUM_TAG_ID = A.MAPPED_ENUM_TAG_ID GROUP BY C.IS_PRIMARY, H.ORIGIN_CODE, G.TAG_CATEGORY_CODE START WITH A.MAPPED_ENUM_TAG_ID = HEXTORAW (:B3 ) CONNECT BY PRIOR A.MAPPED_ENUM_TAG_ID = A.ENUM_TAG_ID
SELECT /*+ INDEX(e) */ XMLAGG(XMLELEMENT ( "TaggingCategory", XMLATTRIBUTES ( G.TAG_CATEGORY_CODE AS "categoryType" ), XMLELEMENT ( "TaggingValue", XMLATTRIBUTES (C.IS_PRIMARY AS "primary", H.ORIGIN_CODE AS "origin"), XMLAGG(XMLCONCAT ( XMLELEMENT ( "Value", XMLATTRIBUTES ( F.TAG_LIST_CODE AS "listType" ), E.TAG_VALUE ), CASE WHEN LEVEL = 1 THEN :B4 ELSE NULL END )) ) )) FROM TABLE (CAST (:B1 AS T_TAG_MAP_HIERARCHY_TAB)) A, TABLE (CAST (:B2 AS T_ENUM_TAG_TAB)) C, REM_TAG_VALUE E, REM_TAG_LIST F, REM_TAG_CATEGORY G, CM_ORIGIN H WHERE E.TAG_VALUE_ID = C.TAG_VALUE_ID AND F.TAG_LIST_ID = E.TAG_LIST_ID AND G.TAGGING_CATEGORY_ID = F.TAGGING_CATEGORY_ID AND H.ORIGIN_ID = C.ORIGIN_ID AND C.ENUM_TAG_ID = A.MAPPED_ENUM_TAG_ID GROUP BY G.TAG_CATEGORY_CODE, C.IS_PRIMARY, H.ORIGIN_CODE START WITH A.MAPPED_ENUM_TAG_ID = HEXTORAW (:B3 ) CONNECT BY PRIOR A.MAPPED_ENUM_TAG_ID = A.ENUM_TAG_ID

By observing the above SQL queries, I found some hints forcing index usage.
I think the XML schema is already created, and it is progressing as you stated above. Please correct me if I am wrong.
I found all of these SQL statements in the AWR report, and all of them are very high resource-consuming queries.
And I am really sorry if I am irritating you by asking all these stupid questions related to XML.
-Yasser
Edited by: YasserRACDBA on Nov 17, 2009 3:39 PM
Did syntax alignment. -
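To answer the question of whether these are the only statements driving CPU, one option is to rank statements directly from the shared pool rather than eyeballing the AWR report. A minimal sketch, assuming Oracle 10g or later and SELECT privileges on the standard dynamic performance views (column names as documented for V$SQL):

```sql
-- Top 10 statements by CPU time currently in the shared pool.
-- CPU_TIME and ELAPSED_TIME are reported in microseconds.
SELECT *
  FROM (SELECT sql_id,
               executions,
               ROUND(cpu_time / 1000000, 1)     AS cpu_sec,
               ROUND(elapsed_time / 1000000, 1) AS elapsed_sec,
               buffer_gets,
               SUBSTR(sql_text, 1, 80)          AS sql_text_head
          FROM v$sql
         ORDER BY cpu_time DESC)
 WHERE ROWNUM <= 10;
```

Comparing the CPU_TIME of the XML queries above against the instance-wide total gives a rough idea of whether they are the only contributors or merely the most visible ones.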
Very high log file sequential read and control file sequential read waits?
I have a 10.2.0.4 database and have 5 Streams capture processes running to replicate data to another database. However, I am seeing very high log file sequential read and control file sequential read waits from the capture processes. This is causing slowness, as the database is wasting so much time on these wait events. From the AWR report:
Elapsed: 20.12 (mins)
DB Time: 67.04 (mins)
and From top 5 wait events
Event                          Waits     Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                                 1,712                  42.6
log file sequential read        99,909     683         7        17.0               System I/O
log file sync                   49,702     426         9        10.6               Commit
control file sequential read   262,625     384         1         9.6               System I/O
db file sequential read         41,528     378         9         9.4               User I/O
Oracle Support hasn't been of much help, other than wasting 10 of my days and telling me to try this and try that.
Do you have Streams running in your environment? Are you experiencing these waits? Have you done anything to resolve them?
Thanks

Welcome to the forums.
There is insufficient information in what you have posted to know that your analysis of the situation is correct or anything about your Streams environment.
We don't know what you are replicating. Not size, not volume, not type of capture, not rules, etc.
We don't know the distance over which it is being replicated ... 10 ft. or 10 light years.
We don't have any AWR or ASH data to look at.
etc. etc. etc. If this is what you provided Oracle Support it is no wonder they were unable to help you.
To diagnose this problem, if one exists, requires someone on-site or with a very substantial body of data which you have not provided. The first step is to fill in the answers to all of the obvious first level questions. Then we will likely come back with a second level of questioning.
But when you do ... do not post here. Your questions are not "Database General" they are specific to Streams and there is a Streams forum specifically for them.
Thank you. -
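For anyone hitting the same symptom, one concrete first step before escalating is to confirm that the capture sessions really are the ones accumulating these waits. A rough sketch, assuming the 10g dynamic performance views (V$STREAMS_CAPTURE exposes the SID of each capture process, which can be joined to V$SESSION_EVENT):

```sql
-- Per-capture-process breakdown of the two suspect wait events.
-- In 10g, TIME_WAITED in V$SESSION_EVENT is in centiseconds.
SELECT c.capture_name,
       e.event,
       e.total_waits,
       e.time_waited
  FROM v$streams_capture c,
       v$session_event   e
 WHERE e.sid = c.sid
   AND e.event IN ('log file sequential read',
                   'control file sequential read')
 ORDER BY c.capture_name, e.time_waited DESC;
```

If one capture process dominates, the next-level questions in the reply above (capture type, rules, redo volume) can be asked about that process specifically rather than the whole environment.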
Very high ASYNC_NETWORK_IO
Hi There; I’m an ‘Accidental DBA’ with a problem (is there any other kind?).
We seem to be getting very high ASYNC_NETWORK_IO; with a WaitCount of around 20 million hits per 24 hour period (total wait time around 9 hours in that same period).
I’ve spent a lot of time researching this wait, and as I understand it, ASYNC_NETWORK_IO is most often caused by the application not consuming data fast enough; so we had a programmer work through the code and resolve all locations where we were using an IQueryable in a for-next loop, converting them immediately into arrays (LINQ to SQL).
This seems to have made no difference.
I would like to set up an extended event trace to find which queries are generating this particular wait, but at an average of over 200 hits per second I’m concerned about performance impact such a trace might have.
I’m looking for suggestions as to how to move forward in resolving this issue.
My first question is, am I correct in thinking that the number of these waits that we are getting is excessively high?
Assuming that is the case how can I go about tracing the offending queries without hammering the system?
If (as I suspect) it is not individual queries that are causing the issue, what else should I be looking for?
Thanks in advance
Paul.

Hi chaps
Thanks for the help.
We have about 100 concurrent users, and they are always complaining about the application being slow, so the application is definitely not performing well.
Using Activity Monitor, the CPU is normally at 20-60%, there are usually no more than 0-4 waiting tasks, and a few hundred batch req/sec.
The only thing that stands out is the very high ASYNC_NETWORK_IO, in terms of both wait time and wait count.
Raju, thank you for the link, I have read that blog post before in my research: the server is using only about 2.5% of the available bandwidth (1 Gb). Our network admin assures me that the network is set up correctly. While it is possible that there are a few badly written queries, most of them are quite lean. Almost all of our processes use LINQ to SQL; there are no bulk data loads.
You also said:
>I’ve spent a lot of time researching this wait and as I understand it ASYNC_NETWORK_IO
>is most often caused by the application not consuming data fast enough; so we had a
>programmer work through the code and resolve all locations where we are using a IQueryable
>in a for next loop and converting them immediately into arrays (Linq To SQL).
I'm not familiar with what kind of SQL that generates.
I asked about a design issue of too much data before, if you are using a low-level interface then the opposite is a question as well, even for modest amounts of data if you somehow use server-side cursors, or otherwise end up sending SQL commands for
just one row at a time, then you can get slow response and high asynch waits. I'm also not sure what you mean by having the programmer "resolve" these locations.
David also asks a good question whether there are just a couple of waits that throw off the totals. A similar but more design-oriented question is whether your app has some little widget that is always tickling SQL Server for an update, 100 users issuing
a one-line query once a second, to fill some tiny counter on the user screen, can have this same kind of effect.
Finally on the perceived app slowness *again* I would ask about design issues, I've seen apps that were very cleverly doing async, background data loads on ten hidden panels while the user gazed at their data. This was very heavily loading the system
for basically no good reason, but it wasn't SQL Server's fault.
Josh
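On the original question of tracing the offending queries without hammering the system: an Extended Events session filtered to the single wait type and writing to an in-memory ring buffer is generally much lighter than a Profiler trace. A sketch, assuming SQL Server 2008 or later; the session name, memory cap, and duration threshold here are arbitrary choices, and the numeric key for ASYNC_NETWORK_IO varies by build, so look it up in sys.dm_xe_map_values first rather than trusting the placeholder below:

```sql
-- Find the map_key for ASYNC_NETWORK_IO on this build first:
-- SELECT map_key FROM sys.dm_xe_map_values
--  WHERE name = 'wait_types' AND map_value = 'ASYNC_NETWORK_IO';

CREATE EVENT SESSION [async_net_io_trace] ON SERVER
ADD EVENT sqlos.wait_info (
    ACTION (sqlserver.sql_text, sqlserver.session_id)
    WHERE wait_type = 99          -- placeholder: substitute the map_key found above
      AND duration  > 100         -- capture only waits longer than 100 ms
)
ADD TARGET package0.ring_buffer   -- in-memory target, no file I/O
    (SET max_memory = 4096)       -- cap the buffer at 4 MB
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

ALTER EVENT SESSION [async_net_io_trace] ON SERVER STATE = START;
```

The duration predicate is the main overhead control: at 200 waits per second, dropping the sub-100 ms waits should cut the captured volume dramatically while still catching the sessions responsible for the nine hours of accumulated wait time.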