Best Practice: Update OWB 10.2.0.1 to 10.2.0.3?

Hey OWB-Guys,
I've searched the forum and Metalink, but I could not find what I'm looking for. I want to update my OWB from version 10.2.0.1 to 10.2.0.3 (I'm working on WinXP Pro SP2, so I can't use 10.2.0.4, right?).
Our system architecture will change soon, so in the future I will have to start the Control Center Service locally on my client computer. If I understood our DBA correctly, there will be no OWB installed on the DB server.
What will I have to do to update OWB? Install the patchset from Metalink on my client computer? What about the repository owner and repository users in the database?
Do you have a document or link with a guide through the necessary steps?
Thanks in advance and have a nice weekend!
Steffen

Hey, has nobody here upgraded from 10.2.0.1 to 10.2.0.3? Please help me with some general hints or links covering this issue...
Thank you!
Steffen

Similar Messages

  • Best Practices on OWB/ODI when using Asynchronous Distributed HotLog Mode

    Hello OWB/ODI:
    I want to get some advice on best practices when implementing OWB/ODI mappings to handle Oracle Asynchronous Distributed HotLog CDC (change data capture), specifically for “updates”.
    Under Asynchronous Distributed HotLog mode, if a record is changed in a given source table, only the column that has been changed is populated in the CDC table with the old and new value, and all other columns with the exception of the keys are populated with NULL values.
    In order to process this update with an OWB or ODI mapping, I need to compare the old value (UO) against the new value (UN) in the CDC table. If both the old and the new value are NOT the same, then this is the updated column. If both the old and the new value are NULL, then this column was not updated.
    Before I apply a row-update to my destination table, I need to figure out the current values of the columns that have not been changed and replace the NULL values with those current values. Otherwise, my row-update would overwrite columns whose values have not changed with NULLs. This is where I am looking for advice on best practices. Here are the 2 possible solutions I can come up with, unless you guys have a better suggestion on how to handle “updates”:
    About My Environment: My destination table(s) are part of a dimensional DW database. My only access to the source database is via Asynchronous Distributed HotLog mode. To build the datawarehouse, I will create initial mappings in OWB or ODI that will replicate the source tables into staging tables. Then, I will create another set of mappings to transform and load the data from the staging tables into the dimension tables.
    Solution #1: Use the staging tables as lookup tables when working with “updates”:
    1.     Create an exact copy of the source tables into a staging environment. This is going to be done with the initial mappings.
    2.     Once the initial DW database is built, keep the staging tables.
    3.     Create mappings to maintain the staging tables using as source the CDC tables.
    4.     The staging tables will always be in sync with the source tables.
    5.     In the dimension load mapping, “join” the staging tables, and identify “inserts”, “updates”, and “deletes”.
    6.     For “updates”, use the staging tables as lookup tables to get the current value of the column(s) that have not been changed.
    7.     Apply the updates in the dimension tables.
    Solution #2: Use the dimension tables as lookup tables when working with “updates”:
    1.     Delete the content of the staging tables once the initial datawarehouse database has been built.
    2.     Use the empty staging tables as a place to process the CDC records.
    3.     Create mappings to insert CDC records into the staging tables.
    4.     The staging tables will only contain CDC records (i.e. new records, updated records, and deleted records).
    5.     In the dimension load mapping, “outer join” the staging tables, and identify “inserts”, “updates”, and “deletes”.
    6.     For “updates”, use the dimension tables as lookup tables to get the current value of the column(s) that have not been changed.
    7.     Apply the updates in the dimension tables.
    Solution #1 uses staging tables as lookup tables. It requires extra space to store copies of source tables in a staging environment, and the dimension load mappings may take longer to run because the staging tables may contain many records that may never change.
    Solution #2 uses the dimension tables as both the lookup tables as well as the destination tables for the “updates”. Notice that the dimension tables will be updated with the “updates” AFTER they are used as lookup tables.
    Any other approach that you may suggest? Do you see any other advantages or disadvantages to either of the above solutions?
    Any comments will be appreciated.
    Thanks.
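
    To make the NULL handling concrete, here is a minimal JDBC sketch of how Solution #2's lookup can be folded into the UPDATE statement itself: a CASE per column keeps the dimension's current value when the CDC row shows both old (UO) and new (UN) values as NULL (column unchanged), and otherwise applies UN, which also covers a genuine update to NULL. The table and columns (cust_dim, email, phone, cust_key) are made up for the example; in OWB/ODI you would express the same CASE logic in the mapping's expression and update operators rather than in hand-written Java.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class DimensionUpdater {
        // One CASE per column: keep the current dimension value when UO and UN
        // are both NULL (column unchanged), otherwise apply UN.
        private static final String UPDATE_SQL =
            "UPDATE cust_dim SET"
          + "  email = CASE WHEN ? IS NULL AND ? IS NULL THEN email ELSE ? END,"
          + "  phone = CASE WHEN ? IS NULL AND ? IS NULL THEN phone ELSE ? END"
          + " WHERE cust_key = ?";

        public static void applyCdcUpdate(Connection conn,
                                          String emailOld, String emailNew,
                                          String phoneOld, String phoneNew,
                                          long custKey) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(UPDATE_SQL)) {
                ps.setString(1, emailOld);   // UO value from the CDC row
                ps.setString(2, emailNew);   // UN value from the CDC row
                ps.setString(3, emailNew);   // applied only when the column changed
                ps.setString(4, phoneOld);
                ps.setString(5, phoneNew);
                ps.setString(6, phoneNew);
                ps.setLong(7, custKey);
                ps.executeUpdate();
            }
        }
    }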

    Hi,
    can you please tell me how to make the JDBC call? I tried it as:
    1. TopicConnectionFactory tc_fact = AQjmsFactory.getTopicConnectionFactory(host, SID, Integer.parseInt(port), "jdbc:oracle:thin");
    and
    2. TopicConnectionFactory tc_fact = AQjmsFactory.getTopicConnectionFactory(host, SID, Integer.parseInt(port), "thin");
    -as given in http://www.acs.ilstu.edu/docs/oracle/server.101/b10785/jm_opers.htm#CIHJHHAD
    The 1st one is giving the error:
    Caused by: oracle.jms.AQjmsException: JMS-135: Driver jdbc:oracle:thin not supported
    at oracle.jms.AQjmsError.throwEx(AQjmsError.java:330)
    at oracle.jms.AQjmsTopicConnectionFactory.<init>(AQjmsTopicConnectionFactory.java:96)
    at oracle.jms.AQjmsFactory.getTopicConnectionFactory(AQjmsFactory.java:240)
    at com.ivy.jms.JMSTopicDequeueHandler.init(JMSTopicDequeueHandler.java:57)
    The 2nd one is erroring out:
    oracle.jms.AQjmsException: JMS-225: Invalid JDBC driver - OCI driver must be used for this operation
    at oracle.jms.AQjmsError.throwEx(AQjmsError.java:288)
    at oracle.jms.AQjmsConsumer.dequeue(AQjmsConsumer.java:1307)
    at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:1028)
    at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:951)
    at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:929)
    at oracle.jms.AQjmsConsumer.receive(AQjmsConsumer.java:781)
    at com.ivy.jms.JMSTopicDequeueHandler.receiveMessages(JMSTopicDequeueHandler.java:115)
    at com.ivy.jms.JMSManager.run(JMSManager.java:90)
    at java.lang.Thread.run(Thread.java:619)
    Is anything else required beyond this? Please help. :(
    Oracle: 10g R4
    Linux environment, and Java is calling AQjmsFactory.getTopicConnectionFactory(...). The Java machine is different from the database machine, and no Oracle client is to be installed on the Java machine.
    The same code works fine when I use oci8 instead of the thin driver and run it on the DB machine.
    ravi
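
    For what it's worth, the JMS-135 in the first attempt comes from the driver string alone: AQjmsFactory expects the short name "thin" (or "oci8"), not the JDBC URL prefix. A minimal sketch of the thin setup follows; the host, SID, port, and credentials are placeholders.

    import javax.jms.Session;
    import javax.jms.TopicConnection;
    import javax.jms.TopicConnectionFactory;
    import javax.jms.TopicSession;
    import oracle.jms.AQjmsFactory;

    public class ThinAqConnect {
        public static void main(String[] args) throws Exception {
            // The driver argument must be "thin", not "jdbc:oracle:thin"
            TopicConnectionFactory tcf =
                AQjmsFactory.getTopicConnectionFactory("dbhost", "ORCL", 1521, "thin");
            TopicConnection conn = tcf.createTopicConnection("aq_user", "aq_password");
            TopicSession session = conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            conn.start();
            // ... look up the topic and receive messages here ...
            session.close();
            conn.close();
        }
    }

    The JMS-225 on dequeue is a different issue, though: if I remember correctly, in that release dequeuing from queues with a custom object-type (ADT) payload was only supported through the OCI driver, which would match the observation that the same code works with oci8 on the DB machine.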

  • Best practice - updating figure numbers in a file, possibly to sub-sub-chapters

    Hi,
    I'm a newbie trying to unlearn my InDesign mindset to work in FrameMaker. What is best practice for producing figure numbers to accompany diagrams throughout a document? A quick CTRL+F in the FrameMaker 12 Help book doesn't seem to point me in a particular direction. Do diagrams need to be inserted into a table, where there is a cell for the image and a cell for the figure details in another? I've read that I should use a letter and colon in the tag, e.g. F: (then figure number descriptor), to keep it separate from other things that update. Is there anything else to be aware of, such as when resetting counts for chapters etc?
    Some details:
    FrameMaker 12.
    There are currently 116 chapters (aviation subjects) to make.
    Each of these chapters will be its own book in pdf form, some of these chapters run to over 1000 pages.
    Figure numbers ideally take the form: "Figure (number of one of the 1-116 chapters used) - figure number", e.g. "Figure 34 - 6" would be the 6th image in the book 'chapter 34'.
    The figure number has to cross reference to explaining text, possibly a few pages away.
    These figures are required to update as content is added or removed.
    The (aviation) chapter is an individual book.
    H1 is the equivalent of the sub-chapter.
    H2 is the equivalent of the sub-sub-chapter.
    H3 is used in the body copy styling, but is not a required detail of the figure number.
    I'm thinking of making sub-chapters in to individual files. These will be more manageable on their own. They will then be combined in the correct order to form the book for one of these (1 of 116) subject chapters.
    Am I on the right track?
    Many thanks.
    Gary

    Hi,
    Many thanks for the link you provided. I have implemented your recommendation into my file. I have also read somewhere about sizing anchored frames to an imported graphic using 'esc' + 'm' + 'p'.
    What confuses me, coming from InDesign, is being able to import these graphics at the size they were made (W x H in mm at 300 ppi) and keep them anchored to a point in the text flow.
    I currently have 1 and 2 column master pages built. When I bring in a graphic my process is:
    insert a single-cell table in the next space after the current text > drop the title below the cell > give the title a 'figure' format. When I import a graphic, it tries to fit it into the current 2-column layout with only part of it showing, in a box which is half the width of a single column!
    A current example: page 1 (2 column page) - the text flows for 1.5 columns. At the end of the text I inserted a single-cell table, then imported an image into the cell.
    Page 2 (2 column page) has the last line of page 1's text in the top left column.
    Page 3 (2 column page) has the last 3 words of page 1 in its top left column. The right column has the table in it with part of the image showing. The image has also been distorted, like it's trying to fit. These columns are 14 cm wide; the cell is 2 cm wide at this point. I have tried to give cells for images 'wider' attributes using the object style designer, but with no luck.
    Ideally I'm trying to make 2 versions: 1) an anchored frame that fits in a 1 column width on a 2 column width page; 2) an anchored frame that fits the full width of my landscape pages (minus some border dimension); this full-width frame should be created on a new succeeding page. I'd like to be able to drop in images to suit these different frames with as much automation as possible.
    I notice many tutorials tell you how to do a given area of the program, but I haven't been able to find one that discusses workflow order. Do you import all text first, then add empty graphic boxes and/or tables throughout and then import images? I'm importing text from Word, but the images are separate, having been vectored or cleaned up in Photoshop - they won't be imported from the same Word file.
    many thanks

  • Best Practices - Update Explorer Properties of BW Objects

    What are some best practices of using the process chain type "Update Explorer Properties of BW Objects"?
    We have the option of updating Conversion Indexes, Hierarchy Indexes, Authorization Indexes, and RKF/CKF Indexes.
    When should we run each update process?
    Here are some options we're considering:
    Conversion Indexes - Run this within InfoCube load process chains that contain currency conversions within explorer objects.
    Hierarchy Indexes - When would this need to be run? Does this need to be run for PartProviders and/or Snapshots? Do ACRs handle this update? Should this be run within InfoCube load chains, or after ACRs?
    Authorization - We plan to run this a couple times a day for all explorer objects.
    RKF/CKF Indexes - Do these need to run after InfoCube loads? With PartProvider and/or Snapshot indexes? After transports have completed?
    Thanks,
    Cote

    Does anyone use Explorer and this process type in productive process chains?

  • Best practice updating plots ?

    Folks -
    I'm looking for best practice advice, or better yet, point me to the FAQ. What's the one-true-LabVIEW-way to keep a stacked plot of a waveform chart updated? I've got a main loop consisting of a flat sequence, the first two frames of which may be updating either of two 1-D arrays. There is a time axis common to both. I need both plotted soon (1-2 sec) after the update happens. Right now, the three arrays are just shared variables, written in the subVIs, while the plot is outside the flat sequence, inside the Until-stop. I put the three together into a waveform, but I'm not at all sure this is good practice. Advice?
    thanks
    Alex
    Attachments:
    OUTLINE-PPS-V2.vi ‏74 KB

    Thanks 10^6. I am confused, but I have a hardware blockage to events (6133), and can't find coherent guidance from NI on the one true path to LabVIEW goodness, only asking stupid questions.
    Whatever you are doing to the chart data does NOT create multiple traces, you create a single waveform, but writing the y data twice, the lower set simply overwriting the data wired higher up.
    Ahh, thanks. I tried to find documents on how to build a waveform of multiple plots; the NI examples I've found don't have time axes. I can't find one summary document about plots, so I have to try things until they work. XY charts did, but could not reliably update. Trying to...
    Never hide an event structure inside a long sequence. If you would press the "commit" button during a time where the code is elsewhere, you might lock up the front panel forever.
    I was afraid of that; I intended to disable the commit button except when needed (second frame).
    Among my problems: in a 2-minute period, I wait for a switch signal external to me. That must start a sequence of waiting for another switch to close and checking a file frequently for updates. If the operator likes that new file, then commit (copy to another SV array). The second signal is my cue to set up and arm a few USB and PCI digitizers. Then about 30 sec of things happen in a sequence. If I could get events out of the 6133, a state machine would be possible, but NI says no, gotta poll.
    What is the point for all these network shared variables? 
    Bind variables from subVIs to indicators.
    What else needs to access those? Any other remote code?
    A few are actual network SVs from elsewhere, or local copies I make and then need on indicators.
    In any case, I recommend rearchitecting this entire thing as a plain state machine. One outer loop, one case structure, and each frame a state of that single case structure. Now you only need one instance of each variable.
    Yeah, that was the plan until I found out that the 6133 doesn't support events. Need to try harder to rearrange.
    thanks again.

  • Best practices for OWB worst behavior

    Is there a comprehensive troubleshooting guide for OWB? Any guidelines for investigating OWB problems?
    Edited by: user11835116 on Aug 27, 2009 1:17 PM

    To Nico,
    Thank you for your information - a debugger is certainly a helpful tool. But I am not sure it would have helped me in finding a problem with a job that processes a huge amount of data where only a little of it actually causes run-time errors. I am not sure you can always figure out the tiny part of the data that would make a debugging session reveal the problem.
    All
    I still wonder what the general comprehensive guidelines for investigating a production problem are. We had a case where an OWB job terminated because the number of errors exceeded the limit. We had all kinds of errors: "no data found", "zero length identifier", "insufficient privileges". I am new to OWB, but I was lucky to have the help of an experienced OWB developer on the one hand and a person who knows the data on the other. The latter discovered that only data with a particular range of a particular attribute was causing the problem. Together we went through the code map by map and found a function that processes that particular kind of data and caused the “no data found” problem. We found the bug because my coworker has good knowledge of the data. I am new to OWB, and without the assistance of that experienced developer I would have had no clue how to search for the bug. This is where I am coming from with my question about a comprehensive roadmap for debugging a run-time error. I am looking for guidelines that work when you do not know the data - general guidelines that tell you where to start and how to proceed, some general sequence of steps.
    Edited by: user8770177 on Aug 31, 2009 12:06 PM

  • Best Practice for Software Update Structure?

    Is there a best practice guide for software update structure? I would like to keep this neat and organized. I would also like to have a test folder for updates with a test group. Thanks.

    Hi,
    Meanwhile, please refer to the following blog for more inspiration.
    Managing Software Updates in Configuration Manager 2012
    http://blogs.technet.com/b/server-cloud/archive/2012/02/20/managing-software-updates-in-configuration-manager-2012.aspx

  • Best Practice for CQ Updates in complex installations (clustering, replication)?

    Hi everybody,
    we are planning a production setup of CQ 5.5 with an authoring cluster replicating to 4 publisher instances. We were wondering what the best update process looks like in a scenario like this. Let's say we need to install the latest CQ 5 update (which we actually have to):
    Do we need to do this on every single instance, or can replication be utilized to distribute updates?
    If updating a cluster - same question: one instance at a time? Just one, and the cluster does the rest?
    The question is really: can update packages (official or custom) be automatically distributed to multiple instances? If yes, is there a "best practice" way to do this?
    Thanks for any help on this!
    Henning

    Hi Henning,
    The CQ 5.5 service packs are distributed as CRX packages. You can replicate these packages, and on the publish instances they are unpacked and installed.
    In a cluster the situation is different: you have only 1 repository. So when you have installed the service pack on one node, the new versions of bundles and other stuff are unpacked to the repository (most likely to /libs). Then the magic (essentially the JcrInstaller) takes care that the bundles are extracted and started.
    I would not recommend activating the service pack in a production environment, because then all publish instances will be updated at the same time. And as a restart is required, you might encounter downtime. Of course you can make it work when you play with the replication agents :-)
    cheers,
    Jörg

  • Best Practice for Expired updates cleanup in SCCM 2012 SP1 R2

    Hello,
    I am looking for assistance in finding a best practice method for dealing with expired updates in SCCM 2012 SP1 R2. I have read this blog post: http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    I have been led to believe there may be a better method, or a more up-to-date best practice process, for dealing with expired updates.
    On the one hand I was hoping to keep software update groups intact, to have a history of what was deployed, but I also want to keep things clean and avoid issues down the road, as I used to have in 2007 with expired updates.
    Any assistance would be greatly appreciated!
    Thanks,
    Sean

    The best idea is still to remove expired updates from software update groups. The process described in that post is still how it works. That also means that if you don't remove the expired updates from your software update groups, the expired updates will still show...
    To automatically remove the expired updates from a software update group, have a look at this script:
    http://www.scconfigmgr.com/2014/11/18/remove-expired-and-superseded-updates-from-a-software-update-group-with-powershell/
    My Blog: http://www.petervanderwoude.nl/
    Follow me on twitter: pvanderwoude

  • Best Practice for Updating children UIComponents in a Container?

    What is the best practice for updating children UIComponents in response to a Container being changed? For instance, when a Canvas is resized, I would like to update all the children UIComponents' height and width so the content scales properly.
    Right now I am trying to loop over the children calling invalidateProperties(), invalidateSize(), and invalidateDisplayList() on each. I know some of the Containers, such as VBox and HBox, have layout managers; is there a way to leverage something like that?
    Thanks.

    You would only do that if it makes your job easier. Generally speaking, it would not.
    When trying to sync sound and animation, I think most authors find it easiest to use graphic symbols, because you can see their animation when scrubbing the main timeline. With movieclips you only see their animation when testing.
    However, if you're going to use ActionScript to control some of your symbols, those symbols should be movieclips.

  • Best practice for updating ATWRT (Characteristic Value) in AUSP

    I've noticed that when we change the characteristic value of a classification, it does not update in the MM record. We have to go into MM02 for each material number that references the characteristic value and manually change it for the row in AUSP to get updated.
    Should I just create a report to loop through and update table AUSP directly? Or is there a better way to do this via a function or BAPI etc.? I want to know what best practice is recommended.

    Hi Scott
    You can use a BAPI to do that.
    Check the following thread:
    BAPI to update characteristics in Material master?
    BR
    Caetano

  • Best practice for auto update flex web applications

    Hi all
    is there a best practice for auto-updating Flex web applications, much in the same way AIR applications have an auto-update mechanism?
    Can you please point me in the right direction?
    cheers
    Yariv

    Hey drkstr
    I'm talking about a more complex mechanism that can handle updates to modules being loaded into the application etc...
    I can always query the server for the version and prevent loading from cache when a module needs to be updated,
    but I was hoping for something easy like the AIR auto-update feature.

  • Not a question, but a suggestion on updating software and best practice (Adobe we need to create stickies for the forums)

    Lots of you are hitting the brick wall in updating, and the end result is a non-recoverable project. In a production environment and with projects due, it's best that you never update while in the middle of projects. Wait until you have a day or two of down time, then test.
    For best practice, get into the habit of saving off your projects to a new name by incremental versions.  i.e. "project_name_v001", v002, etc.
    Before you close a project, save it, then save it again to a new version. In this way you'll always have two copies and will not lose the entire project. Most projects crash upon opening (at least in my experience).
    At the end of the day, copy off your current project to an external drive.  I have a 1TB USB3 drive for this purpose, but you can just as easily save off just the PPro, AE and PS files to a stick.  If the video corrupts, you can always re-ingest.
    Which leads us to the next tip: never clear off your cards or wipe the tapes until the project is archived.  Always cheaper to buy more memory than recouping lost hours of work, and your sanity.
    I've been doing this for over a decade and the number of projects I've lost?  Zero.  Have I crashed?  Oh, yeah.  But I just open the previous version, save a new one and resume the edit.

    Ctrl + B to show the Top Menu
    View > Show Sidebar
    View > Show Status Bar
    Deactivate Search Entire Library to speed things up.
    This should make managing your iPhone the same as it was before.

  • Best Practice on Updating From a DB

    Hi Everyone,
    What are some best practices surrounding getting data from an Oracle database into the cache layer when a data change event (insert, update, delete) happens? I've searched far and wide, and the best answer I can find is to use Extractor/Replicator -> JMS -> Subscriber -> cache.
    Thank you for your help.

    You're right, DCN is an interesting idea, but it's again a case where the technology works on simple Hello World things but fails to deliver in the real world.
    To me DCN looks like an unfinished Oracle project: a lot of marketing stuff but poor features. It's good mostly for student work or test labs, not for real-world complexity.
    Two reasons:
    1. DCN has severe limitations on the complexity of joins and queries if you plan to use the query change notification feature.
    2. It puts too big a load on the database by creating tons of events when you don't need and don't expect them, because it's too generic.
    Instead of DCN, create ordinary Oracle AQ queues, using a tiny SQL object type event as a payload, then create triggers and/or PL/SQL stored procedures which fill the event with all the primary keys you need and the unique ID of the object you need to extract.
    Triggers will filter out unnecessary updates, sending events only when you wish.
    If the conditions are too complex for triggers, you may create & place events either by a call from the event source app itself or on a scheduled basis; it's entirely up to you. Also, the technique of creating object views and using INSTEAD OF triggers on those object views works pretty well.
    And finally, implement a listener on the Coherence side, which will read the event, make the necessary extracts and assemble a Java object ready to be placed into the cache, based on the event ID and the set of the event's primary keys. After the Java object has been assembled, you can place it into the cache.
    Don't use Hibernate, TopLink or any other relational-to-object frameworks; they're too slow and add excessive and unnecessary overhead to the process. Use standard Oracle database features; they're much faster and transaction-safe. Usage of these frameworks within a 10g or 11g database is obsolete and caused mainly by a lack of knowledge among Java developers about the database's features in this regard.
    In order to make the whole system fail-safe and scalable, you have to implement the listener in a fail-safe fashion, in the form of a work manager + slave processes spawned on the other nodes. The work manager has to be auto fail-safe and auto scalable, so that if the node holding the work manager instance fails due to cache cluster member departure, a reset or something else, another work manager is automatically spawned on the first available node.
    Also, the work manager should spread & synchronize the work among the slave listener processes based on the current cache cluster members, automatically re-balancing and recovering work in case of cache member join/departure.
    Out-of-the-box Coherence has an implementation of a work manager, but it's not fail-safe and does not provide the automatic scale-up/recovery features described above, so you have to implement your own.
    All the features I've described are implemented and happily used in a complex OLTP + workflow system backed by a big Oracle RAC cluster with a huge workload, processing millions of transactions per day.
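
    To illustrate the listener end of this pipeline, here is a minimal sketch of one slave listener that consumes such an event via JMS, does a plain JDBC extract, and puts the assembled object into Coherence. The cache name, the event fields, the orders table and the Order class are all invented for the example; the work-manager/failover machinery described above is deliberately left out.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.jms.MapMessage;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.sql.DataSource;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    // One slave listener: event in -> JDBC extract -> assembled object into the cache.
    public class AqCacheLoader implements MessageListener {

        public static class Order implements java.io.Serializable {  // placeholder payload class
            public long id;
            public String status;
        }

        private final DataSource dataSource;                          // plain JDBC, no ORM
        private final NamedCache cache = CacheFactory.getCache("orders");  // cache name assumed

        public AqCacheLoader(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        @Override
        public void onMessage(Message msg) {
            try {
                MapMessage event = (MapMessage) msg;                  // tiny event payload
                long id = event.getLong("ORDER_ID");                  // key carried by the event
                cache.put(id, load(id));                              // place assembled object
            } catch (Exception e) {
                throw new RuntimeException(e);                        // real system: error queue
            }
        }

        private Order load(long id) throws Exception {
            try (Connection c = dataSource.getConnection();
                 PreparedStatement ps = c.prepareStatement(
                         "SELECT status FROM orders WHERE order_id = ?")) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    Order o = new Order();
                    o.id = id;
                    o.status = rs.getString(1);
                    return o;
                }
            }
        }
    }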

  • Best practice how to retrieve & update data w/o any jsf-lifecycle-overhead

    I have a request scoped jsf managed bean called "ManagedBean". This bean has a method annotated with "@PostConstruct" that retrieves data from a database. The data is shown in a jsp "showAndEditData.jsp" in <h:inputText /> components - so the data is editable.
    The workflow is as follows:
    First, when navigating to "showAndEditData.jsp", the ManagedBean is created, the "@PostConstruct"-method is invoked, and the data retrieved from the database is shown to the user.
    Second, the user changes the data.
    Third, the user presses the submit button, the ManagedBean is created again, the "@PostConstruct"-method is invoked again, and the data is retrieved from the database again. Then the data is overridden by the changes the user made and passed to the business-tier (where it will be saved to the database).
    Every step that I marked with "again" is completely unnecessary and a huge overhead.
    Is there a way to prevent these unnecessary steps?
    Or, asking in other words: is there a best practice for how to retrieve and update data efficiently and without any overhead using JSF?
    I do not want to use session scoped managed beans, because this would be a huge overhead as well.

    The first "again" is neccessary, because after successfull validation, you need new object in request to store the submitted value.
    I agree to the second and third, really unneccessary and does not make sense.
    Additionally I think it�s bad practice putting data in session beansTotal agree, its a disadvantage of JSF that we often must use session.
    Think there is also an bigger problem with this.
    Dont know how your apps are working, my apps start an new database transaction per commit on every new request.
    So in this case, if you do an second query on postback, which uses an different database transaction, it could get different data as for the inital request.
    But user did his changes <b>accordingly</b> to values of the first snapshot during the inital request.
    If these values would be queried again on postback, and they have been changed meanwhile, it becomes inconsistent, because values of snapshot two, do not fit to user input.
    In my opionion zebhed has posted an major mistake in JSF.
    Dont now, where to store the data, perhaps page scope could solve this.
    Not very knowledge of that section, but still ask myself, if this data perhaps could be stored in the components and on an postback the data are rendered from components + submittedvalues instead of model.
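
    For reference, the "page scope" wished for here is essentially what JSF 2 later shipped as the view scope: the bean (and the data snapshot the user edited) survives postbacks to the same view, so the @PostConstruct load runs only once. Below is a minimal sketch, assuming a JSF 2 environment (on JSF 1.x, Tomahawk's t:saveState offered something similar); the Row type and the persistence calls are placeholders.

    import java.io.Serializable;
    import java.util.List;
    import javax.annotation.PostConstruct;
    import javax.faces.bean.ManagedBean;
    import javax.faces.bean.ViewScoped;

    @ManagedBean
    @ViewScoped  // lives as long as the user stays on showAndEditData.jsp
    public class ShowAndEditBean implements Serializable {

        public static class Row implements Serializable {  // placeholder data holder
            public String value;
        }

        private List<Row> rows;

        @PostConstruct
        public void init() {
            // Runs once when the view is first built -- NOT again on the postback,
            // so the snapshot the user edited is exactly the one that gets saved.
            rows = loadFromDatabase();
        }

        public String save() {
            saveToBusinessTier(rows);   // placeholder for the real service call
            return null;                // stay on the same view
        }

        public List<Row> getRows() { return rows; }

        private List<Row> loadFromDatabase() { return new java.util.ArrayList<>(); }
        private void saveToBusinessTier(List<Row> rows) { /* delegate to the EJB */ }
    }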
