Best practice: bulk update (the inverse of a REF CURSOR SELECT)?

To move data from the database to the application, there are REF CURSORs. However, there is no equally easy way to move updates/inserts from a dataset back to the database.
Could someone provide some guidelines or simple examples of how to do bulk updates (and I'm talking multiple columns for multiple rows)?
I guess the way to go is array binding. Are there any guidelines on how to handle it in .NET and PL/SQL?
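To make the question concrete, here is the kind of PL/SQL target I imagine array binding would call into. This is only my sketch, with placeholder table and parameter names:

CREATE OR REPLACE PROCEDURE UPDATE_EMP_SALARIES (
  P_IDS      IN DBMS_SQL.NUMBER_TABLE,  -- employee ids sent from the app
  P_SALARIES IN DBMS_SQL.NUMBER_TABLE   -- matching new salary for each id
)
IS
BEGIN
  -- FORALL sends the whole array in one round trip instead of row by row
  FORALL I IN 1 .. P_IDS.COUNT
    UPDATE EMPLOYEES
       SET SALARY      = P_SALARIES(I)
     WHERE EMPLOYEE_ID = P_IDS(I);
END UPDATE_EMP_SALARIES;

On the .NET side, my understanding is that ODP.NET can bind such parameters by setting OracleParameter.CollectionType = OracleCollectionType.PLSQLAssociativeArray, or that a plain UPDATE statement can be repeated with OracleCommand.ArrayBindCount, but I would welcome confirmation of either approach.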

You don't use the DECLARE keyword when defining stored procedures. The IS/AS keyword is what you use instead.
CREATE OR REPLACE PROCEDURE TEST_REF
IS
  TYPE REF_EMP IS REF CURSOR RETURN EMPLOYEES%ROWTYPE;
  RF_EMP REF_EMP;
  V_EMP EMPLOYEES%ROWTYPE;
BEGIN
  DBMS_OUTPUT.ENABLE(1000000);
  OPEN RF_EMP FOR
    SELECT *
      FROM EMPLOYEES
     WHERE EMPLOYEE_ID > 100;
  FETCH RF_EMP INTO V_EMP;
  DBMS_OUTPUT.PUT_LINE(V_EMP.FIRST_NAME || ' ' || V_EMP.LAST_NAME);
  CLOSE RF_EMP;
EXCEPTION
  WHEN OTHERS
    THEN DBMS_OUTPUT.PUT_LINE(SQLERRM);
END TEST_REF;

This will compile. It seems a bit odd, though, that you are opening a cursor and fetching only the first row from it. I would suspect that you actually want to loop over every row that is returned.
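For example, a minimal sketch of such a loop, replacing the single FETCH above, would be:

  LOOP
    FETCH RF_EMP INTO V_EMP;
    EXIT WHEN RF_EMP%NOTFOUND;  -- stop once the cursor is exhausted
    DBMS_OUTPUT.PUT_LINE(V_EMP.FIRST_NAME || ' ' || V_EMP.LAST_NAME);
  END LOOP;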
Justin

Similar Messages

  • Best Practice for Updating children UIComponents in a Container?

    What is the best practice for updating children UIComponents in response to a Container being changed? For instance, when a Canvas is resized, I would like to update the height and width of all the child UIComponents so the content scales properly.
    Right now I am looping over the children, calling invalidateProperties, invalidateSize, and invalidateDisplayList on each. I know some of the Containers, such as VBox and HBox, have layout managers; is there a way to leverage something like that?
    Thanks.

    You would only do that if it makes your job easier; generally speaking, it does not.
    When trying to sync sound and animation, I think most authors find it easiest to use graphic symbols, because you can see their animation when scrubbing the main timeline. With movie clips you only see their animation when testing.
    However, if you're going to use ActionScript to control some of your symbols, those symbols should be movie clips.

  • SQL 2008 R2 Best Practices for Updating Statistics for a 1.5 TB VLDB

    We currently have a ~1.5 TB VLDB (SQL 2008 R2) that services both OLTP and DSS workloads pretty much on a 24x7x365 basis. For many years we have been updating statistics (full scan, 100% sample size) for this VLDB once a week on the weekend, which is currently taking up to 30 hours to complete.
    Somewhat recently we have been experiencing intermittent issues while statistics are being updated, which I doubt is just a coincidence. I'd like to understand exactly why the process of updating statistics can cause these issues (timeouts/errors). My theory is that the optimizer is forced to choose an inferior execution plan while the needed statistics are in "limbo" (stuck between the "old" and the "new"), but that is just a theory. I'm somewhat surprised that the "old" statistics can't continue to be used while the new statistics are being generated (like the process for rebuilding indexes online), but I don't know all the facts behind this mechanism yet, so that may not even apply here.
    I understand that we have the option of reducing the sample percentage/size for updating statistics, which is currently set at 100% (full scan). Reducing the sample size will reduce the total processing time, but my understanding is that doing so leaves the optimizer with less than optimal statistics for choosing the best execution plans. This seems to be a classic case of not being able to have one's cake and eat it too.
    So in a nutshell, I'm looking to fully understand why the process of updating statistics can cause access issues, and I'm also looking for best practices in general for updating statistics on such a VLDB. Thanks in advance.
    Bill Thacker

    I'm with you. Yikes is exactly right with regard to suspending all index optimizations for so long. I'll probably start a separate forum thread about that in the near future, but for now let's stick to best practices for updating statistics.
    I'm a little disappointed that multiple people haven't already chimed in and offered up some viable solutions; like I said previously, I can't be the first person in need of such a thing. This database has 552 tables, with a whole lot more statistics objects than that associated with those tables. The metadata has to be there for determining which statistics objects can go (not utilized much, if at all, so delete them, and also produce an actual script to delete the useless ones identified) and what the proper sample percentage/size should be for updating the remaining, utilized statistics (again, also producing a script that executes the appropriate UPDATE STATISTICS commands for each table based on cardinality).
    That solution would be much more ideal, in my opinion, than just issuing a single UPDATE STATISTICS command that samples the same percentage/size for every table (e.g. 10%), which is effectively what we're doing today at 100% (full scan); see the sketch below.
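    For illustration, the contrast I have in mind is between a blanket command and scripted per-table commands; the table names here are just placeholders:

    -- one-size-fits-all sample rate, regardless of table size:
    UPDATE STATISTICS dbo.SomeTable WITH SAMPLE 10 PERCENT;
    -- versus scripted, per-table rates chosen from cardinality, e.g.:
    UPDATE STATISTICS dbo.BigTable   WITH SAMPLE 2 PERCENT;
    UPDATE STATISTICS dbo.SmallTable WITH FULLSCAN;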
    Come on SQL Server Community. Show me some love :)
    Bill Thacker

  • EBS Supplier best practice to update vendor site code, update or create a new one

    I have a question related to the EBS Supplier vendor site code. The application lets you update the vendor site code, but what is the best practice for updating it? Would you inactivate the existing one and create a new one, or would you just update the existing value?

    Ok,
    My workaround was to put an action in my task flow to commit. After that I put two more actions (execute) and then go back to my page. This works, but I would like to know if there is a more efficient way to do this just when I am inserting.
    Regards

  • Using CRUD procedures to update data and ref cursors to return data

    Hi:
    I am currently evaluating Apex 3.x to replace an existing app that uses lots of procedures to update and return data.
    1. Is it possible to return data from a function that returns a cursor (or from a procedure that has an input/output ref cursor parameter, for that matter)? Example: let's say I have the following function in a package:
    function get_data return sys_refcursor
    is
      l_cursor sys_refcursor;
    begin
      open l_cursor for select sysdate as field from sys.dual;
      return l_cursor;
    end;
    Can I add a page with a table that is populated based on this function? Based on my research it is not possible, but I would like an APEX expert to confirm it.
    2. The old application uses CRUD procedures to update data; that is, for each table there are three procedures: insert, update and delete. Question: is it possible to channel all the updates, inserts and deletes through these procedures? Furthermore, in lots of cases I use sequences to populate the primary keys, and the new value is returned as an output parameter (see the sketch at the end of this post). Can I retrieve the output value and use it, say, in the next page I am branching to?
    In the samples that I've seen, the same form is used for insert and update. How do I distinguish between the two modes?
    3. Can you please point me to some samples that show how to do 1 and 2? The standard samples that I've seen use automatic row processing.
    4. Could you please recommend some good books about APEX or HTML DB? I found the documentation unintuitive; it is hard to picture quickly how things tie together by reading it. I wish the documentation were more task oriented and presented how to implement the generic patterns used in web apps.
    Thank you in advance
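    To make question 2 concrete, here is roughly the shape of the insert procedures I mean; this is only a sketch, and the table, sequence and column names are placeholders:

    procedure ins_customer (
      p_name in  customers.name%type,
      p_id   out customers.customer_id%type
    )
    is
    begin
      -- the sequence populates the primary key; RETURNING hands it back
      insert into customers (customer_id, name)
      values (customers_seq.nextval, p_name)
      returning customer_id into p_id;
    end ins_customer;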

    Hi guys
    Check out the last 2 posts in this thread for ideas on how to implement 1.
    Report on user data from LDAP
    Varad

  • Best practice for updating ATWRT (Characteristic Value) in AUSP

    I've noticed that when we change the characteristic value of a classification, it does not update in the MM record. We have to go into MM02 for each material number that references the characteristic value and manually change it for the row in AUSP to get updated.
    Should I just create a report to loop through and update table AUSP directly? Or is there a better way to do this via a function or BAPI etc.? I'm wanting to know what best practice is recommended.

    Hi Scott
    You can use a BAPI to do that.
    Check the following thread:
    BAPI to update characteristics in Material master?
    BR
    Caetano

  • Best Practice on Updating From a DB

    Hi Everyone,
    What are some best practices for getting data from an Oracle database into the cache layer when a data change event (insert, update, delete) happens? I've searched far and wide, and the best answer I can find is to use Extractor/Replicator -> JMS -> Subscriber -> cache.
    Thank you for your help.

    You're right, DCN is an interesting idea, but it's again a case where the technology works on simple Hello World things but fails to deliver in the real world.
    To me, DCN looks like an unfinished Oracle project: a lot of marketing, but poor features. It's good mostly for student work or test labs, not for real-world complexity.
    Two reasons:
    1. DCN has severe limitations on the complexity of joins and queries if you plan to use the query change notification feature.
    2. It puts too much pressure on the database by creating tons of events when you don't need or expect them, because it's too generic.
    Instead of DCN, create ordinary Oracle AQ queues, using a tiny SQL object type event as the payload, then create triggers and/or PL/SQL stored procedures which fill the event with all the primary keys you need and the unique ID of the object you need to extract.
    Triggers will filter out unnecessary updates, sending events only when you wish.
    If the conditions are too complex for triggers, you may create and place events either by a call from the event-source application itself or on a scheduled basis; it's entirely up to you. Also, the technique of creating object views and using an INSTEAD OF trigger on the object view works pretty well.
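    A minimal sketch of that AQ setup, assuming a hypothetical ORDERS table and placeholder names throughout:

    create type change_evt as object (obj_id number, pk1 number);
    /
    begin
      -- queue table typed on the event object, plus the queue itself
      dbms_aqadm.create_queue_table(queue_table        => 'evt_qt',
                                    queue_payload_type => 'CHANGE_EVT');
      dbms_aqadm.create_queue(queue_name => 'evt_q', queue_table => 'evt_qt');
      dbms_aqadm.start_queue(queue_name => 'evt_q');
    end;
    /
    create or replace trigger orders_evt_trg
      after insert or update on orders
      for each row
      when (new.status = 'READY')  -- the trigger filters out unneeded updates
    declare
      l_opts  dbms_aq.enqueue_options_t;
      l_props dbms_aq.message_properties_t;
      l_msgid raw(16);
    begin
      dbms_aq.enqueue(queue_name         => 'evt_q',
                      enqueue_options    => l_opts,
                      message_properties => l_props,
                      payload            => change_evt(:new.order_id, :new.customer_id),
                      msgid              => l_msgid);
    end;
    /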
    Finally, implement a listener on the Coherence side which reads the event, makes the necessary extracts and assembles a Java object ready to be placed into the cache, based on the event ID and the event's set of primary keys. After the Java object is assembled, you can place it into the cache.
    Don't use Hibernate, TopLink or any other relational-to-object framework; they're too slow and add excessive, unnecessary overhead to the process. Use standard Oracle database features: they're much faster and transaction-safe. Using these frameworks with a 10g or 11g database is obsolete, and is caused mainly by a lack of knowledge among Java developers about the database's features in this regard.
    In order to make the whole system fail-safe and scalable, you have to implement the listener in a fail-safe fashion, in the form of a work manager plus slave processes spawned on the other nodes. The work manager has to be auto fail-safe and auto scalable, so that if the node holding the work manager instance fails (due to cache cluster member departure, a reset, or something else), another work manager is automatically spawned on the first available node.
    Also, the work manager should spread and synchronize the work among the slave listener processes based on the current cache cluster members, automatically re-balancing and recovering work in case of cache member join/departure.
    Out of the box, Coherence has an implementation of a work manager, but it's not fail-safe and does not provide the automatic scale-up/recovery features described above, so you have to implement your own.
    All the features I've described are implemented and happily used in a complex OLTP + workflow system backed by a big Oracle RAC cluster with a huge workload, processing millions of transactions per day.

  • Best practice to update to Snow Leopard

    I just placed my family pack order on Amazon.com for Snow Leopard. This will be the first time I've done an OS upgrade on a Mac (all four Macs in my house came with Leopard on them, so we've only done the "software update" variety). I am a reformed PC guy, so humor me!
    What is the best practice for upgrading from 10.5.8 to 10.6? On a PC, my inclination would be to back up my data, reformat the whole drive, install Windows fresh, then all my apps, then the data. I hate that, and it takes hours.
    What is the best practice way to upgrade the Mac OS?

    The best option is Erase and Install. The next best option is Archive and Install. Use the latter if you do not want to or can't erase your startup volume.
    How to Perform an Archive and Install
    An Archive and Install will NOT erase your hard drive, but you must have sufficient free space for a second OS X installation which could be from 3-9 GBs depending upon the version of OS X and selected installation options. The free space requirement is over and above normal free space requirements which should be at least 6-10 GBs. Read all the linked references carefully before proceeding.
    1. Be sure to use Disk Utility first to repair the disk before performing the Archive and Install.
    Repairing the Hard Drive and Permissions
    Boot from your OS X Installer disc. After the installer loads, select your language and click on the Continue button. When the menu bar appears, select Disk Utility from the Installer menu (Utilities menu for Tiger). After DU loads, select your hard drive entry (mfgr.'s ID and drive size) from the left-side list. In the DU status area you will see an entry for the S.M.A.R.T. status of the hard drive. If it does not say "Verified", then the hard drive is failing or has failed. (SMART status is not reported on external FireWire or USB drives.) If the drive is "Verified", then select your OS X volume from the list on the left (the sub-entry below the drive entry), click on the First Aid tab, then click on the Repair Disk button. If DU reports any errors that have been fixed, re-run Repair Disk until no errors are reported. If no errors are reported, quit DU and return to the installer.
    2. Do not proceed with an Archive and Install if DU reports errors it cannot fix. In that case use Disk Warrior and/or TechTool Pro to repair the hard drive. If neither can repair the drive, then you will have to erase the drive and reinstall from scratch.
    3. Boot from your OS X Installer disc. After the installer loads select your language and click on the Continue button. When you reach the screen to select a destination drive click once on the destination drive then click on the Option button. Select the Archive and Install option. You have an option to preserve users and network preferences. Only select this option if you are sure you have no corrupted files in your user accounts. Otherwise leave this option unchecked. Click on the OK button and continue with the OS X Installation.
    4. Upon completion of the Archive and Install you will have a Previous System Folder in the root directory. You should retain the PSF until you are sure you do not need to manually transfer any items from the PSF to your newly installed system.
    5. After moving any items you want to keep from the PSF you should delete it. You can back it up if you prefer, but you must delete it from the hard drive.
    6. You can now download a Combo Updater directly from Apple's download site to update your new system to the desired version as well as install any security or other updates. You can also do this using Software Update.

  • Best practice to update inline/publish folio?

    Hi there
    I think it's all in my question.
    I have an online application with an online folio, and I need to update the same folio with a new version.
    What is the best practice to organize my work?
    Do I have to continue working in InDesign with the same ID but not update/republish it in Folio Producer? (This option scares me totally... what if my draft goes online???)
    Or do I have to create another folio and, after testing it, publish it with the same folio name and description? (I'm not sure it will update the same file, as it is not the same.)
    What is the best practice to organize myself/my work/my files?
    Thank you

  • Best Practice for Updating Infotype HRP1001 via Class / Methods

    I want to update an existing (custom) relationship between two positions.
    For example I want
    Position 1 S  = '50007200'
    Position 2 S =  '50007202'
    Relationship = 'AZCR'
    effective today through 99991231
    Is there a best practice or generally accepted way for doing this using classes/methods rather than RH_INSERT_INFTY ?
    If so, please supply an example.
    Thanks...
    ....Mike

  • Best practice for updating SL to 10.6.8

    I recently purchased a 2009 iMac with the Snow Leopard upgrade installed (OS 10.6).
    It looks as though there are two updates; should I install both, or can I install just the last/latest? I'd appreciate being directed to any best-practices discussions.
    FYI, I will want to install Rosetta for older applications (CS3 and CS4) that I need for old client files. Thanks.
    Ali

    Buy one. Anything you want to keep shouldn't be on only one drive; problems may occur at any time, and are particularly likely to occur during an OS update or upgrade.

  • Best practices for updating agents

    We're getting ready to do our first system-wide update of agents to fix a critical bug. Our summer vacation is just ending, and teachers and students will be coming back very soon and turning on our Windows 7 computers for the first time in many weeks, although they won't all be turned on the same day. When they are turned on they will be attempting to get various updates, in particular Windows updates, but also Flash Player and Adobe Reader. I need to update the agents as quickly as possible, but I'm concerned about the possibility of the agent update conflicting with another update, especially Windows updates. Isn't it possible that Windows Update could restart a computer while the agent update is happening (or the other way around), leaving the machine in an unstable or unusable state? What are the best practices for dealing with this? I considered the possibility of deploying the agent to a dynamic workstation group whose members all have a certain file or files that indicate that they have already received the latest Windows updates. However, I can't see how to create a dynamic group based on such criteria.
    So far I have only updated a few devices at a time using "Deploy System Updates to Selected Devices in the Management Zone". When those updates are done I cancel that deployment because that's the only option I can find that does anything. If you can offer general advice for a better strategy of updating agents I'd appreciate that. Specifically, how would you push an agent update to several hundred computers that will be turned on sometime over the next two weeks?
    Thanks very much.

    Originally Posted by jcw_av
    We're getting ready to do our first system-wide update of agents to fix a critical bug. Our summer vacation is just ending, and teachers and students will be coming back very soon and turning on our Windows 7 computers for the first time in many weeks, although they won't all be turned on the same day. When they are turned on they will be attempting to get various updates, in particular Windows updates, but also Flash Player and Adobe Reader. I need to update the agents as quickly as possible, but I'm concerned about the possibility of the agent update conflicting with another update, especially Windows updates. Isn't it possible that Windows Update could restart a computer while the agent update is happening (or the other way around), leaving the machine in an unstable or unusable state? What are the best practices for dealing with this? I considered the possibility of deploying the agent to a dynamic workstation group whose members all have a certain file or files that indicate that they have already received the latest Windows updates. However, I can't see how to create a dynamic group based on such criteria.
    So far I have only updated a few devices at a time using "Deploy System Updates to Selected Devices in the Management Zone". When those updates are done I cancel that deployment because that's the only option I can find that does anything. If you can offer general advice for a better strategy of updating agents I'd appreciate that. Specifically, how would you push an agent update to several hundred computers that will be turned on sometime over the next two weeks?
    Thanks very much.
    To be honest, you have to work around your other deployments. The ZCM agent isn't "aware" of other deployments going on. For example, ZPM doesn't care that you're running Bundles at the same time (you'll get errors in the logs about the fact that only one MSI can run at a time, for example). ZPM usually recovers and picks up where it left off.
    Bundles, on the other hand, with System Update, are not so forgiving, especially if you have agents prior to 11.2.4 MU1 (cache corruption errors).
    We usually:
    a) Halt all software rollouts/patching as best we can.
    b) Put our software deployments (bundles) on event: user login. Typically the system update is on device refresh or at a scheduled time, and is device-associated.
    If possible, I'd suggest that you use WOL, system update, and voila.
    Or, if no WOL is available, tell your users to leave their PCs turned on (they don't have to be logged in) on X night, and set up your system updates for that night with auto-reboot enabled. That worked well.
    But otherwise, the three components of ZCM (Bundles, ZPM, System Update) don't know/care about each other, AFAIK.
    --Kevin

  • Best Practices to update Cascading Picklist mapping for Account record type

    1. Most of the existing picklist value names in the parent and related picklists have been modified in the external app's master list, so the same needs to be updated in CRMOD.
    2. If we need to update a picklist value, do we need to DISABLE the existing value and CREATE a new one?
    3. Is there any best practice to avoid doing manual cascading-picklist mapping for the Account record type? We have around 500 picklist values to be mapped with parent and related picklists.
    Thanks!

    Mahesh, I would recommend disabling the existing values and creating new ones. This means manually remapping the cascading picklists.

  • SAP ASE Best Practice latest update

    Hello experts,
    just wondering if somebody has already thoroughly reviewed the latest guide for best practices on SAP Sybase ASE?
    I am talking about the document from note 1680803 - SYB: Migration to SAP Adaptive Server Enterprise - Best Practice (formerly note 1722359 - SYB: Running SAP applications on SAP ASE - Best Practice).
    The guide for normal runtime operation was merged with the guide for migration, but there are some contradictory statements.
    Apart from the fact that the case study is again designed for a server with huge memory and lots of CPU cores (not a realistic case normally; I wonder who sets up such huge servers so often...), I have found some inconsistencies.
    E.g. in the section "Reconfigure Engines and Parallel Processing", the text talks about limiting ASE engines to 16, but the command configures 32:
    alter thread pool syb_default_pool with thread count = 32, idle timeout = 2000
    with no change from the previous setup for migration. Is this just a typo? I understand it should be 16, in which case the number of network tasks for normal operation would be 4 (as mentioned at the beginning of the guide, you normally set up one per 3-4 engines). If it is not a typo, then the number of network tasks is wrong, as it should be 8.
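    If it is a typo, the corrected example would presumably read as follows (my assumption, following the guide's own figure of 16):
    alter thread pool syb_default_pool with thread count = 16, idle timeout = 2000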
    They also introduced idle timeout, but only discuss ERP and a possibly lower value for Solution Manager - does this mean that for BW you keep the default value (which, if I am not mistaken, is 100)? As per ADM540, you should even decrease this timeout when the SAP system shares a server with the database - I know that document is old, but it is again contradictory; I'm not saying it is wrong, but it is not well explained.
    If anybody has checked the new version of the guide, please let me know. I think it is a bit messed up, and it is a bit difficult to distinguish what you should set up for the migration case and what for the normal operation case.
    Thanks!
    Regards,
    Matus

    Actually, quite a few customers run with that many engines and that much memory. In fact, it is difficult these days to even buy a server with less than 128 GB of memory and 16 cores/32 threads. Pretty much the only time we see less is when the install is in a VM. Interestingly, we had comments on the first version suggesting the numbers were not realistic given that the typical systems being deployed were much larger... In addition, in my experience with customers on SAP systems, they were not aware of how much memory was necessary to really support medium to large systems, based on the configurations they were attempting.
    I am sorry that you feel some of the examples are contradictory. You are correct in pointing out that the text refers to 16 engines while the example configures 32... so yes, for that specific example, it should have been 16.
    Secondly, I have not seen ADM540, but I think there is a bit of a problem if it suggests that. In my opinion (and I have spent a lifetime tuning ASE), the idle timeout for ERP and BW should likely both be 1000+, and 2000 is not unreasonable. The comment in ADM540 likely applies when ASE and a NetWeaver CI share the same cores - e.g. you have a 4-core box, ASE is running on 2 cores (we will ignore threads for this discussion), and you have 30 NW worker processes, which obviously need to bump ASE off the CPU in order to run. This may be fine in a test/dev or even a Solution Manager system, but bumping ASE off the core is NOT a good thing for a production system. In fact, I would encourage using numactl or similar to fence off the cores used for ASE from NW worker processes if at all possible. We have seen cases of overloaded NW installations with multiple CI instances, each with hundreds of worker processes, starving CPU away from ASE... so I would tend to be more than firm in suggesting that 100 is a very bad starting point. Given the number of client-side joins that SAP uses to avoid [DBMS-proprietary] temp tables, it is critical that ASE's (or any DBMS's) response time be minimized as much as possible. Having ASE yield the core practically as soon as it finishes processing one task (and puts it to sleep pending an I/O) just makes things run slowly. Think of a typical query that returns 10 rows, say wide enough that each row fills one packet. If the packet transmit time (plus client ACK) takes more than 100 microseconds of CPU time (almost a given for network interactions, as clock ticks are in nanoseconds and networking is minimally milliseconds, i.e. 1000+ microseconds), ASE would yield the CPU every time it sent a packet. When the client wanted the next packet, the OS would have to wake up the ASE process (an interrupted sleep), which is a nasty heavyweight operation. Hence it is best for ASE to hang out on the CPU until it is reasonably sure that nothing more is going to happen very soon, and on current CPUs, having it run for 1-2 ms (1000-2000 microseconds) shouldn't be a hardship. If you created a separate thread pool for batch worker processes, then I could see using a lower idle timeout such as 200 or 250, as sketched below; 100 is just plain too low in my mind. It is like saying ASE expects an odd query every few seconds rather than a steady workload. Basically, at that level there had better be a task in the ASE job queue, or one already on the way over the network, or that engine is going to sleep.
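    A minimal sketch of such a separate pool, with a placeholder name and counts, using the same syntax as the command quoted above:
    create thread pool batch_pool with thread count = 4
    alter thread pool batch_pool with idle timeout = 250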
    While I say that with regard to ADM540 itself, which I have not seen, one customer did show me the notebook from a class (ASE Sys Admin) they attended, and it was really targeted at non-SAP installations more than SAP installations from a reality/experience standpoint. Part of the issue with that class was that it borrowed liberally from the old SY classes as a starting point, but at the point the class was developed there was not a lot of experience with running SAP installations on ASE to really point out fine-tuning areas such as idle timeout.
    However, the document was really aimed primarily at Business Suite rather than BW systems or a Solution Manager install (which are much smaller). There are a lot of other considerations for BW that the guide doesn't get into, although some of the sizing is a better start than the defaults provided by SAPINST.
    The former runtime guide was essentially just merged into the Post-Migration Steps section.
    I may do a quick refresh in the near future (due to some recent experiences), so if you have other specific examples of the text and SQL not aligning, please let me know.

  • SCCM 2012 R2 Driver - Best Practices on Updating Driver Packages?

    For example, new Surface drivers were released, and we are currently using the September ones. What is the best way to update the drivers? If I import them, it shows multiple drivers, old and new... Thoughts? Blog post?

    No. You must always import drivers to be able to use either of the Apply Driver task types in a task sequence.
    However, you can also run a driver installer provided by the vendor as a package, because the driver installer is a generic exe that does whatever it's supposed to do outside the control of ConfigMgr.
    Note that although you can't use an Auto Apply Drivers task in stand-alone media, you can absolutely use an Apply Driver Package task in stand-alone media. In general, most folks do not rely on Auto Apply but instead rely on Apply Driver Package, for multiple reasons.
    Jason | http://blog.configmgrftw.com | @jasonsandys
