LOV best approach?

I am creating LOVs using backing beans in my forms. For the backing bean I am creating a separate data source and accessing the database for the LOV data. Is there any issue with this approach? I am skeptical about using ADF BC plus a separate data source for fetching the LOV data. Is there a better approach? I need the backing bean because I need to do some front-end validation using the LOV value, and by default the LOVs take the indexes rather than the actual id values from the database.
Thanks
Suneesh

Why not have an AM method which returns a list, and then build the LOV as per your requirement (sketched below)?
Why a separate data source, and that too from a backing bean?
This violates the MVC pattern.
Venkat
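
A minimal sketch of Venkat's suggestion, not the original poster's code: the AM class, view object, and attribute names (LovServiceImpl, CountriesLov, CountryId, CountryName) are hypothetical, and a real page would reach the AM method through the binding layer. The point is that the SelectItem value is the database id, so front-end validation sees the id rather than the list index.

    import java.util.ArrayList;
    import java.util.List;
    import javax.faces.model.SelectItem;
    import oracle.jbo.Row;
    import oracle.jbo.ViewObject;
    import oracle.jbo.server.ApplicationModuleImpl;

    public class LovServiceImpl extends ApplicationModuleImpl {
        // Exposed as a client method on the AM: returns id/label pairs from a
        // read-only LOV view object, keeping all data access inside ADF BC.
        public List<String[]> getCountryLov() {
            List<String[]> items = new ArrayList<String[]>();
            ViewObject vo = findViewObject("CountriesLov"); // hypothetical LOV VO
            vo.executeQuery();
            while (vo.hasNext()) {
                Row row = vo.next();
                items.add(new String[] {
                    row.getAttribute("CountryId").toString(),   // real database id
                    row.getAttribute("CountryName").toString() // display label
                });
            }
            return items;
        }
    }

    // In the backing bean (which would normally reach the AM through the page's
    // binding layer), convert the pairs into SelectItems for the list component:
    //
    //     List<SelectItem> selectItems = new ArrayList<SelectItem>();
    //     for (String[] pair : am.getCountryLov()) {
    //         selectItems.add(new SelectItem(pair[0], pair[1])); // value = id, label = name
    //     }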

Similar Messages

  • Design Patterns, best approach for this app

    Hi all,
    I am starting out with design patterns, and I would like to hear your opinions on what would be the best approach for this app.
    This is basically an app for data monitoring, analysis, and logging (voltage, temperature & vibration).
    I am using 3 devices for N channels (NI 9211A, NI 9215A, NI PXI 4472), all running asynchronously at different rates.
    Signals are processed and monitored for logging at a rate specified by the user, and in real time as well.
    Individual devices can be initialized or stopped at any time.
    Basically I'm using 5 loops:
    1. GUI: Stop App, Reload Plot Names (event handling)
    2. Chart & Log: monitors data and starts/stops logging data at a time specified in the GUI (state machine)
    3. Temperature DAQ monitoring @ 3 S/s (state machine) - NI 9211A
    4. Voltage DAQ monitoring and scaling @ 1 kS/s (state machine) - NI 9215A
    5. Vibration DAQ monitoring and analysis @ 25.6 kS/s (state machine) - NI PXI 4472
    I have attached the files for review; thanks in advance for taking the time.
    Attachments:
    V-T-G Monitor_Logger.llb 355 KB

    mundo wrote:
    Thanks Will for your response.
    So, basically, could I apply a producer/consumer architecture to just the vibration analysis loop, or to all data being collected by the Monitor/Logger loop?
    Is it OK to have individual loops for every DAQ device, as shown?
    Thanks.
    You could use the producer/consumer architecture to split the areas where you are doing both the data collection and the analysis in the same state machine (a Java sketch of the split follows this reply). If one of these processes is not time critical, or the data rate is slow enough, you could leave it in a single state machine. I admit that I didn't look through your code, but based purely on the descriptions above I would imagine that you could change the three collection state machines to use a producer/consumer architecture. I would leave your UI processing in its own loop, as well as the logging process. If the logging is time critical you may want to split that as well.
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot
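
    The poster's code is LabVIEW, so the loops are graphical, but the split Mark describes can be sketched in Java to show the shape of it; everything below (class name, block size, queue depth) is illustrative rather than taken from the attached VIs.

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        // Producer/consumer split: the acquisition loop only reads samples and
        // enqueues them; the analysis loop dequeues at its own pace, so a slow
        // analysis step no longer stalls acquisition.
        public class ProducerConsumerSketch {
            public static void main(String[] args) {
                BlockingQueue<double[]> queue = new ArrayBlockingQueue<>(1024);

                Thread producer = new Thread(() -> {
                    try {
                        while (!Thread.currentThread().isInterrupted()) {
                            queue.put(acquireBlock()); // blocks if the consumer falls behind
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });

                Thread consumer = new Thread(() -> {
                    try {
                        while (!Thread.currentThread().isInterrupted()) {
                            analyzeAndLog(queue.take());
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });

                producer.start();
                consumer.start();
            }

            private static double[] acquireBlock() { return new double[256]; } // stands in for the DAQ read
            private static void analyzeAndLog(double[] samples) { }            // stands in for analysis/logging
        }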

  • COST CENTER CHANGES FOR OPEN POs - BEST APPROACH AND HOW TO DO IT

    Dear All,
    We are changing the cost center for open POs. Kindly tell us the best approach for doing this for all open PO line items; the number of open POs is 4000.
    We will totally block the old POs and change the POs with no GR and IR immediately. But what, and how, should we do it if there is a GR, an IR, or one of them present? And also, what if there are differences between the IR and GR? Kindly provide all the best possible approaches.
    Below are the scenarios:
    Open PO without GR/IR
    Open PO only with GR
    Open PO only with IR
    Open PO with IR/GR without difference
    Open PO with IR/GR with differences
    Service entry sheet
    Kindly provide me all the best approaches to achieve this task. Please keep in mind any approach besides reversal of the GR or IR.
    qsm sap
    Edited by: qsm sap on Feb 15, 2010 12:08 PM

    Hi,
    Open PO without GR/IR
    Open PO only with GR
    Make the account assignment changeable at the time of IR in SPRO for account assignment category 'K', so that you can change the cost center while doing MIRO, if you do not want to go with a mass change.
    Service entry sheet
    You can change the cost center while doing the SES. No issue.
    For the others, reversing the IR is one option.
    Regards,
    Pardeep Malik

  • R/3 4.7 to ECC 6.0 Upgrade - Best Approach?

    Hi,
    We have to upgrade R/3 4.7 to ECC 6.0
    We have to do the DB, Unicode, and R/3 upgrades. I want to know what the best approaches are and what risks are associated with each approach.
    We have been considering the following approaches (but need to understand the risks of each approach).
    1) DB and Unicode in the first step, and then the R/3 upgrade after 2-3 months
    I want to understand: if we have about 700 include programs changing as part of the Unicode conversion, how much functional testing is required for this?
    2) DB in the first step, and then Unicode and R/3 together after 2-3 months
    Does it make sense to combine Unicode and R/3, as both require similar testing? Is it possible to do it in one weekend with minimum downtime? We have about 2 terabytes of data and will be using 2 systems for import and export during the Unicode conversion.
    3) DB and R/3 in the first step, and then Unicode much later
    We had a discussion with SAP and they say there is a disclaimer on not doing Unicode. But I also understand that this disclaimer does not apply if we are on a single code page. Can someone please let us know if this is correct, and also whether doing Unicode later will have any key challenges apart from certain language characters not being available.
    We are on single code page 1100 and the database size is about 2 terabytes.
    Thanks in advance
    Regards
    Rahul

    Hi Rahul
    Regarding your 'Unicode doubt', some ideas:
    1) The Upgrade Master Guide SAP ERP 6.0 and the Master Guide SAP ERP 6.0 include introductory information. Among other things, these guides reference the SAP Service Marketplace location http://service.sap.com/unicode@sap.
    2) At Unicode@SAP you can find several substantial FAQs.
    Conclusion from the FAQs: first of all, your strategy needs to follow your business model (which we cannot see from here).
    Example: the "Upgrade to mySAP ERP 2005" FAQ includes interesting remarks in the section "DO CUSTOMERS NEED TO CONVERT TO A UNICODE-COMPLIANT ENVIRONMENT?"
    "...The Unicode conversion depends on the customer situation....
    ... - If your organization runs a single code page system prior to the upgrade to mySAP ERP 2005, then the use of Unicode is not mandatory. ..... However, using Unicode is recommended if the system is deployed globally to facilitate interfaces and connections.
    - If your organization uses Multiple Display Multiple Processing (MDMP) .... the use of Unicode is mandatory for the mySAP ERP 2005 upgrade....."
    In the Technical Unicode FAQ you read under "What are the advantages of Unicode ...", that "Proper usage of JAVA is only possible with Unicode systems (for example, ESS/MSS or interfaces to Enterprise Portal). ....
    => Depending on whether your systems support global processes, and on your use of Java applications, your strategy might need to look different.
    3) In particular, in view of your third option, I recommend taking a look at these FAQs if you have not already done so.
    Remark: mySAP ERP 2005 is the former name of the application, which is now named SAP ERP 6.0.
    Regards, and HTH, Andreas R

  • I have a MacBook Pro 5,4 running OS X 10.6.8 and Safari 5.1.10. A website I like has a known bug with 5.1.10 and recommends I install a newer version of Safari or use Firefox or Chrome. Just looking for advice on the best approach. Thanks!

    I have a MacBook Pro 5,4 running OS X 10.6.8 and Safari 5.1.10. A website I like has a known bug with 5.1.10 and recommends I install a newer version of Safari or use Firefox or Chrome. Just looking for advice on the best approach. Thanks!

    Unfortunately, Safari cannot be updated past 5.1.10 on a Mac running v10.6.8.
    So, the options are to upgrade to a newer OS X or use Firefox or Chrome.
    Be aware, Apple no longer supports Snow Leopard v10.6 > www.ibtimes.com/apple-kills-snow-leopard-os-x-106-no-longer-receives-security-updates-1558393
    See if your Mac can run v10.9 Mavericks > OS X Mavericks: System Requirements
    If so, you can download and install Mavericks for free from the App Store.
    Read prior to upgrading > Upgrading to 10.7 and above, don't forget Rosetta! | Apple Support Communities

  • What is the best approach to handle multiple FKs to a single table?

    If two tables are joined with each other in more than one way, for example:
    MAIN table is (col1, col2,....coln, person_creator_id, person_modifier_id)
    PERSON table is (person_id, name, address,........ phone) etc
    At database level PERSON_CREATOR_FK and PERSON_MODIFIER_FK are defined.
    Objective is to create a report that shows
    col1, col2...coln, person creator name, person modifier name
    If the above two objects are imported with their FKs into an EUL and Discoverer Plus is used to create the above report, then on the first inclusion of a person name Discoverer Plus will ask you to pick the join (provided the checkbox to disable this feature is not checked). Once you pick the 'person creator' join it will never allow you to pick the person modifier name.
    One solution is to create a custom folder with a query like:
    select m.col1, m.col2, ... m.coln,
           pc.name creator_name, pc.address creator_address, ... pc.phone creator_phone,
           pm.name modifier_name, pm.address modifier_address, ... pm.phone modifier_phone
    from main m,
         person pc,
         person pm
    where m.person_creator_id = pc.person_id
    and m.person_modifier_id = pm.person_id
    The second solution is to import the PERSON folder twice into the EUL (optionally naming one person_creator and the other person_modifier) and manually define one join per folder, i.e. join MAIN with PERSON_CREATOR on person_creator_fk and join MAIN with PERSON_MODIFIER using person_modifier_fk.
    Now Discoverer Plus will let you drag Name from each person folder without needing to resolve multiple joins.
    The question is: which approach is better, or is there a better way?
    With solution 1 you will not be able to use functions on folder items.
    With solution 2 there is an EUL design overhead of including the same object multiple times and then manually defining all the joins (or deleting unwanted joins), and this could be a problem when you have person_modifier and person_creator in nearly all tables. It could be more complicated if the person table is further linked to other tables and users want to see that information too (for instance, if the person address is stored in a LOCATION table joined on location_id and users want to see both the creator address and the modifier address, you will now have to create multiple LOCATION folders).
    A third solution could be to register a function in Discoverer that returns the person name when a person_id is passed. This will work perfectly for the above requirement, but a downside is that the report will run slower if users need filters on person names (the function will then be used in the where clause). Also, this solution is very specific to the above scenario; it will not work if you want to give the report developer the freedom to pick any attribute from the person table (say the person table contains 50 attributes; it is not a good idea to register 50 functions).
    Any comments/suggestion will be appreciated.
    thanks

    Hi
    In a roundabout way you have really answered your own question :-)
    In my opinion the best approach (although by no means the only approach - see below) would be to have the object loaded as two folders, with the first join going to one folder and the second join to the other. You would of course name the folders appropriately.
    Here's a workflow that I use all of the time and one that I teach when I'm giving Discoverer Administrator training. It might help you:
    1. Bring in the PERSON folder to begin with
    2. Make all necessary adjustments to bring it up to deployment standard. These adjustments would be: folder name (e.g. PERSON_CREATOR), item names, item placement, default positions, default aggregation and so on.
    3. Create or assign the required lists of values
    4. Create any required calculations
    5. Create any required conditions
    6. Create the first join from this folder to MAIN.
    7. Click on the heading for the folder and press CTRL-C.
    8. Click on the heading for the business area and press CTRL-V. A second copy of the folder, complete with all of the adjustments you made earlier will be inserted into the business area.
    Note: joins are not copied, everything else is.
    9. Rename this folder to, say, PERSON_MODIFIER
    10. Rename the items as appropriate
    11. Add a join from this folder to MAIN - you're done
    Other ideas that I have used and that work well would be to use a database view or create a complex folder. Either will work; in both cases you would need to join on some column other than the ones you referred to earlier (the JDBC sketch after this reply shows the underlying self-join).
    I hope this helps
    Best wishes
    Michael
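
    For reference, the dual-alias self-join that both folder setups express can be run directly; a minimal JDBC sketch, with the connection details as placeholders and the column lists abbreviated to match the question.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        // Minimal sketch: the connection URL and credentials are placeholders.
        // PERSON is joined twice under different aliases, which is exactly what
        // the two-folder EUL setup expresses declaratively.
        public class DualJoinExample {
            public static void main(String[] args) throws Exception {
                String sql =
                    "select m.col1, " +
                    "       pc.name as creator_name, " +
                    "       pm.name as modifier_name " +
                    "from   main m " +
                    "join   person pc on m.person_creator_id = pc.person_id " +
                    "join   person pm on m.person_modifier_id = pm.person_id";
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                     PreparedStatement ps = conn.prepareStatement(sql);
                     ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.printf("%s created by %s, modified by %s%n",
                            rs.getString("col1"),
                            rs.getString("creator_name"),
                            rs.getString("modifier_name"));
                    }
                }
            }
        }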

  • What's the best approach for handling about 1300 connections in Oracle?

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for various types of users. (We can store only the relevant data in a particular schema, so the number of records per table can be reduced by replicating tables; but we would also have to maintain all the data in another schema, so we would need to update two schemas in a given session - one schema for the user and another schema for all the data - and there may then be update problems.)
    OR
    2. Using a single schema for all users.
    Note: all users may access the same tables, and there may be many more records than in the previous case.
    Which is the best approach?
    Please give your valuable ideas.

    That is true, but I want a solution from you all. I want you to tell me how to fix my friend's car.

  • Best approach for IDOC - JDBC scenario

    Hi,
    In my scenario I am creating a sales order (ORDERS04) in the R/3 system, which needs to be replicated to a SQL Server system. I am sending the order to XI as an IDoc and want to use JDBC to send the data to SQL Server. I need to insert data into two tables (header & details). Is this possible without BPM? Or what is the best approach for this?
    Thanks,
    Sri.

    Yes, this is possible without BPM.
    Just create the corresponding data type for the insertion.
    If the records to be inserted are different, then there will be 2 different data types (one for header and one for detail).
    Do a multi-mapping, where your source is mapped into the header and detail data types, and then send using the JDBC receiver adapter (a plain-JDBC sketch of the equivalent inserts follows this reply).
    For the structure of your data type for the insertion, just check this link:
    http://help.sap.com/saphelp_nw04/helpdata/en/7e/5df96381ec72468a00815dd80f8b63/content.htm
    To access any database from XI, you will have to install the corresponding driver on your XI server.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3867a582-0401-0010-6cbf-9644e49f1a10
    Regards,
    Bhavesh
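
    For reference only, outside XI: a plain-JDBC sketch of the equivalent header/detail inserts. The table, column, and connection names are hypothetical; grouping both inserts in one transaction is the property that keeps a two-table insert safe without orchestration.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        // Sketch only: table/column names and connection details are illustrative.
        // Header and detail rows go in one transaction, so a failed detail insert
        // cannot leave an orphaned header row behind.
        public class OrderInsertSketch {
            public static void insertOrder(String orderNo, String[][] items) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:sqlserver://dbhost:1433;databaseName=orders", "user", "secret")) {
                    conn.setAutoCommit(false);
                    try (PreparedStatement header = conn.prepareStatement(
                             "insert into order_header (order_no) values (?)");
                         PreparedStatement detail = conn.prepareStatement(
                             "insert into order_detail (order_no, material, qty) values (?, ?, ?)")) {
                        header.setString(1, orderNo);
                        header.executeUpdate();
                        for (String[] item : items) {       // item = {material, qty}
                            detail.setString(1, orderNo);
                            detail.setString(2, item[0]);
                            detail.setInt(3, Integer.parseInt(item[1]));
                            detail.addBatch();
                        }
                        detail.executeBatch();
                        conn.commit();
                    } catch (Exception e) {
                        conn.rollback();
                        throw e;
                    }
                }
            }
        }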

  • Best approach to return large data (> 4K) from a stored proc

    We have a stored proc (Oracle 8i) that:
    1) receives some parameters.
    2) performs computations which create a large block of data
    3) returns this data to the caller.
    It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure as having an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) in a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual) but I get the error "ORA-01460: unimplemented or unreasonable conversion requested".
    I suspect this is taking too much time; any tips on the best approach here? Is there a resource with REAL examples of returning large data?
    If I switch to a CLOB, will it speed up the process, be compatible with the callers, etc.? All the references to CLOBs I saw use trivial examples.
    Thanks for any help,
    Yoram Ayalon
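
    The CLOB route the poster is weighing is straightforward to sketch from JDBC; a minimal example, assuming a hypothetical procedure get_large_data(p_in IN NUMBER, p_out OUT CLOB) and placeholder connection details. The poster's actual callers are ASP and ColdFusion, but the OUT-parameter pattern is the same.

        import java.io.Reader;
        import java.sql.CallableStatement;
        import java.sql.Clob;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Types;

        // Sketch only: procedure name and connection details are hypothetical.
        // A CLOB OUT parameter avoids both the LONG-in-record limitation and
        // the temp-table round trip described above.
        public class ClobOutExample {
            public static String fetchLargeResult(int param) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                     CallableStatement cs = conn.prepareCall("{call get_large_data(?, ?)}")) {
                    cs.setInt(1, param);
                    cs.registerOutParameter(2, Types.CLOB);
                    cs.execute();
                    Clob clob = cs.getClob(2);
                    // Read the whole CLOB into a String (fine for a few MB).
                    StringBuilder sb = new StringBuilder();
                    try (Reader r = clob.getCharacterStream()) {
                        char[] buf = new char[8192];
                        int n;
                        while ((n = r.read(buf)) != -1) {
                            sb.append(buf, 0, n);
                        }
                    }
                    return sb.toString();
                }
            }
        }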

  • Best approach to "migrate" from BEX reports to Webi reports ?

    Hello,
    I have read lots of documents regarding best practices for building Webi reports and universes on top of BW.
    But I can't find any document about the best approach - not from a performance point of view, but in terms of the best way of using reports.
    I mean: when end users are coming from BEx reports (where they can drill down through hierarchies and use free filters) to Webi reports (where the layout is quite beautiful and the user can change it easily), this is not the same way of consuming reports.
    I come from the BO world and am new to reporting on top of BW.
    For me, Webi is good for fairly static layout reporting where the data is clear and available. Of course you can have prompts for interactivity and more accurate reporting. Drill-down is just a feature, not the real purpose of the reporting tool.
    So, in my view there is a gap between the two tools (BEx and Webi), but the end users are the same.
    So I'm wondering if you have any feedback on the best approach to building Webi reports where the end users are coming from BEx reporting.
    And how do you choose between prompts, drill-down (with the available filters at the top of the window), fold/unfold, and input controls, or just having different levels of hierarchies in the table/sections/breaks but without drill-down (because if you drill down, the report starts to look weird with the different levels)?
    So, if you have any feedback or advice...
    thanks in advance,
    Rgds,

    Hi,
    Webi doesn't replace BEx reports; it is for a different audience. In fact, BEx is for OLAP reporting and analysis.
    You can find some answers on this page:
    [FAQ: The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI & Business Objects Roadmap|FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap [original link is broken]|FAQ]
    Specifically for "What is the future of the BEx Query Designer?" you can read here:
    [FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap [original link is broken]#section11] and here [FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap [original link is broken]#section3]
    The idea is to use the right tool for the right job.
      You can find more information here [http://www.sdn.sap.com/irj/sdn/edw], [http://www.sap.com/solutions/sapbusinessobjects/index.epx], [http://www.sap.com/solutions/sapbusinessobjects/newsevents/index.epx], [http://www.sap.com/community/flash/BusinessIntelligenceAGuideforMidsizeCompanies.pdf]
    I hope this helps you.
    Best regards.

  • Best approach to create an RTF template having more than 50 tables

    Hi All,
    Need your help. I am new to BI Publisher. Currently we are using BIP 11g.
    I want to develop an .rtf template having lots of layouts and images.
    Data is coming from different tables (for example, pulling from around 40 tables). When I tried to pull data from 5 tables by joining them, it took a long time using a data model in BI Publisher 11g, saved as XML and used in the Word doc.
    Could you please suggest the best approach: whether I should develop the .rtf template via a data model or via a query to generate the report.
    Please also suggest / guide me.
    Regards & Thanks in advance.

    These are very specific requirements.
    First of all it relates to the logic behind the report:
    for example, are the 50 tables related? Or are they 50 independent tables? Or maybe 5 related and the others independent?
    Based on the relationships between the tables, you create your SQL statement(s).
    How many SQL statement(s) you will have determines the ways to get the data - for example, by package or trigger, etc.
    Keep in mind the size of the resulting select statement(s):
    if the size is, say, 1 MB it should be fast to get the report, but 1000 MB can consume a lot of time.
    Also keep in mind that the time is spent not only selecting the data but also merging the data with the template.
    It looks like experimenting, and knowing the full logic of the report, are the only ways to get the needed output in terms of data and time.

  • What are the best approaches for mapping re-start in OWB?

    What are the best approaches for mapping re-start in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10 G (10.2.0.3.0). OWB is installed on Linux.
    We have number of mappings. We built process flows for mappings as well.
    I would like to know the best approaches to incorporate restart options in our process, i.e. after a failure of a mapping in a process flow.
    How do we recycle failed rows?
    Are there any built-in features/best approaches in OWB to implement the above?
    Do the runtime audit tables help us to build a restart process?
    If not, do we need to maintain our own (custom) tables to hold such data?
    How did other forum members handle the above situations?
    Any idea ?
    Thanks in advance.
    RI

    Hi RI,
    How many mappings (range) do you have in a process flow? Several hundred (100-300 mappings).
    If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails? Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails the process flow is suspended (the transition to m3 will not be performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the m2 mapping execution from the Workflow Monitor - open the diagram with the process flow, select mapping m2, click the Expedite button, and choose the option Repeat.
    On restart, will it run m1 again and then m2 and so on, or will it restart at row 1 of m2? You can specify the restart point. "At row 1 of m2" - I don't understand what you mean (all mappings run in set-based mode, so in case of an error all table updates are rolled back;
    but there are several exceptions - for example, multiple target tables in a mapping without correlated commit, or an error in a post-mapping - you must carefully analyze the results of the error).
    What will happen if m3 fails? The process is suspended and you can restart execution from m3.
    By having no failover and max. number of errors = 0, you achieve recycling of failed rows down to zero (0). These settings guarantee only two possible return results of a mapping - SUCCESS or ERROR.
    What is the impact if we have a large volume of data? In my opinion, for large volumes set-based mode is the preferred data processing mode.
    With this mode you have the full range of enterprise features of the Oracle database - parallel query, parallel DML, nologging, etc.
    Oleg

  • Best approach to add Z custom field to IC Agent Inbox search and results view

    Hi Experts,
    We have a requirement to add a Z custom field to the IC Agent Inbox search and results views. I have found multiple forum threads and ideas, but I am looking for the best approach for handling this. I am sure you experts have already done this.
    Thanks in advance.
    Regards
    Siva

    Hi Sivakumar,
    AET is the best way by far to create a custom field in this area. It is easy and simple.
    Also, once a field is added to one business object, it can be used in other objects as well.
    There is also a demo available for AET on SDN.
    Please let me know if any more help is required.
    Thanks,
    Bhushan

  • What's the best approach to resetting Calendar data on Server?

    I have a database format error in a calendar that I only noticed after the migration to Server on Yosemite. I'll paste in a snippet from the Error Log at the bottom that shows the error - I've highlighted the description of the problem in red.
    I found a pretty cool writeup from Linc in a different thread, but it's aimed at fixing a similar problem for a local user on their own machine rather than an iCal server like what we're running. Here's the link to that thread: Re: Calendar crashes on open. For example, does something like Calendar Cleaner work on our server database as well?
    In my case I think I'd basically like to gracefully remove all the Calendar databases from Server and start fresh (all the users' calendars are backed up on their local machines, so they can just import them into fresh/empty calendars once I've cleaned out the old stuff). Any thoughts on the "best approach" would be much appreciated.
    Here's the error log...
    File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twi sted/internet/defer.py", line 1099, in _inlineCallbacks
    2015-01-31 07:14:41-0600 [-] [caldav-0]         result = g.send(result)
    2015-01-31 07:14:41-0600 [-] [caldav-0]       File "/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/txdav/caldav/datastore/sql.py", line 3635, in component
    2015-01-31 07:14:41-0600 [-] [caldav-0]         e, self._resourceID
    2015-01-31 07:14:41-0600 [-] [caldav-0]     txdav.common.icommondatastore.InternalDataStoreError: Data corruption detected (Invalid property: GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     VERSION:2.0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CALSCALE:GREGORIAN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     PRODID:-//Apple Inc.//Mac OS X 10.8.2//EN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTART:20121114T215900Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTEND:20121114T232700Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CLASS:PUBLIC
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CREATED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DESCRIPTION:Flight leg 2 of 2 for trip from MSP to LAX\\nhttp://www.google.
    2015-01-31 07:14:41-0600 [-] [caldav-0]      com/search?q=US+29+flight+status\\nBooked on November 8\\, 2012\\n
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTAMP:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LAST-MODIFIED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LOCATION:Sky Harbor International Airport\\, Phoenix\\, AZ
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SEQUENCE:0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     STATUS:CONFIRMED
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SUMMARY:US 29 from PHX to LAX
    2015-01-31 07:14:41-0600 [-] [caldav-0]     URL:http://www.hipmunk.com/flights/MSP-to-LAX#!dates=Nov14,Nov17&group=1&s
    2015-01-31 07:14:41-0600 [-] [caldav-0]      elected_flights=96f6fbfd91,be8b5c748d;kind=flight&locations=MSP,LAX&dates=
    2015-01-31 07:14:41-0600 [-] [caldav-0]      Nov14,Nov16&group=1&selected_flights=96f6fbfd91,
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-CALENDARSERVER-PERUSER-UID:D0737009-CBEE-4251-A288-E6FCE5E00752
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRANSP:OPAQUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACKNOWLEDGED:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACTION:AUDIO
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ATTACH:Basso
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRIGGER:-PT2H
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-APPLE-DEFAULT-ALARM:TRUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-WR-ALARMUID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ) in id: 3405
    2015-01-31 07:14:41-0600 [-] [caldav-0]    
    2015-01-31 07:16:39-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None
    2015-01-31 08:08:40-0600 [-] [caldav-1]  [AMP,client] [calendarserver.tools.purge#warn] Cleaning up future events for principal A95C9DB2-9757-46B2-ADF6-4DECE2728820 since they are no longer in directory
    2015-01-31 08:09:10-0600 [-] [caldav-1]  [-] [twext.enterprise.jobqueue#error] JobItem: 39, WorkItem: 762001 failed: ERROR:  canceling statement due to statement timeout
    2015-01-31 08:09:10-0600 [-] [caldav-1]    
    2015-01-31 08:13:40-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None

    <facepalm>  Well, there you go.  It turns out I was over-thinking this.  The Calendar app on a Mac can manage this database just fine.  Sorry about that.  There may be an easier way to do this, but here's how I did it.
    Use the Calendar.app on a local computer to:
    - Export the corrupted calendar to an ICS file on the local computer (Calendar -> File -> Export -> Export)
    - Create a new local calendar (Calendar -> File -> New Calendar -> On My Mac)
    - Import the corrupted calendar into the new/empty local calendar (Calendar -> File -> Import...)
    - Delete years and years of old events, including the one that was triggering that error message
    - Export the (now much smaller) local calendar to another ICS file on my computer (Calendar -> File -> Export -> Export)
    - Create a new calendar on the server (Calendar -> File -> New Calendar -> in the offending server-based iCal account)
    - Import the edited/fixed/smaller/no-longer-corrupted calendar into the new/empty server calendar (Calendar -> File -> Import...)
    - Make the newly-created iCal calendar the primary calendar (drag it to the top of the list of calendars on the server)
    - Delete the old/corrupted calendar (right-clicking on the bad calendar in the calendar list - you can only delete it once it's NOT the primary calendar any more)
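
    If hand-deleting old events is impractical, the specific corruption in the log above (the malformed GEO property) could also be stripped from the exported ICS file between the export and import steps; a minimal sketch, with placeholder file names. GEO is an optional VEVENT property, so dropping it loses only the event's coordinates.

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.List;
        import java.util.stream.Collectors;

        // Sketch only: file names are placeholders. Drops every GEO property
        // (such as the invalid one in the log above) from an exported calendar
        // before re-import. Note that ICS folds long lines with a leading
        // space; GEO lines are short and unfolded, so this filter is safe.
        public class StripGeoProperty {
            public static void main(String[] args) throws IOException {
                List<String> lines = Files.readAllLines(
                    Paths.get("corrupted-export.ics"), StandardCharsets.UTF_8);
                List<String> cleaned = lines.stream()
                    .filter(line -> !line.startsWith("GEO:"))
                    .collect(Collectors.toList());
                Files.write(Paths.get("cleaned-export.ics"), cleaned, StandardCharsets.UTF_8);
            }
        }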

Maybe you are looking for

  • CD/DVD Drive issue

    Wondering if anyone would know the answer to this question. My combo drive on my iBook has, over the past 2 days, been leaving a thin gummy line on my CDs and DVDs (the line is always in the same spot, about half way across the CD/DVD). I looked at cds/

  • The PDF did not convert to Excel at all. Only two pages out of 24

    The PDF file only partially converted to Excel; of 24 pages, only two were converted. What's going on?

  • Deprecated Hints

    Hi Masters, what are the hints that are deprecated in Oracle 11g, e.g. bypass_ujvc? Though I know this hint is undocumented, is there any document/URL in which I can find all of these? Any help will be highly appreciated. Regards

  • Which method does the actual bulk fetch from database in ADF?

    Hi, I'm looking to instrument my ADF code to see where bottlenecks are. Does anyone know which method does the bulk fetch from the database so that I can override it? Thanks Kevin

  • HT5503 How do you download iOS 6.0 for iPad

    I would like to download iOS 6.0 for my iPad 2. How can I do that?