No Data Best Practice

Hi Gurus,
What is the best practice for handling no data returned by a SELECT? With SQL Server you can do an ISNULL and do an IF on that. But with Oracle it appears you need to do a COUNT INTO or use the NO_DATA_FOUND exception. What is best practice? Is it one of the two above, or something different? Just want to make sure we are handling this scenario properly... it is a topic that has come up many times with our dev group.
--S

There is no real "best practice" - only reality and fact.
When using an implicit SQL cursor in PL/SQL, you tell the PL engine that you are not interested in the cursor handle for that SQL. You do not want to fetch from that cursor handle manually. You do not want to reference the projection (columns returned) via that cursor handle.
PL says this is fine. But as you do not want to deal with the cursor handle, you also cannot deal with the return codes of that cursor. So it will raise an exception when no data is returned by that SQL.
When you define an explicit cursor, the exact opposite is true - as you do define a cursor handle on the "client" side (PL is a SQL client/caller). This means that you do want to deal with the actual cursor yourself. Thus no exception will be raised when there is no data returned - as you can test that yourself using that very same cursor handle you have defined.
Thus no "best practice". Only fact. You deal with NO_DATA_FOUND exceptions when using implicit SQL cursors in PL. You deal with a cursor handle when using explicit SQL cursors in PL.
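To make the two options concrete, here is a minimal PL/SQL sketch of both (the employees table and its columns are hypothetical, for illustration only):

-- Implicit cursor: SELECT INTO raises NO_DATA_FOUND when no row matches.
declare
  v_name varchar2(100);
begin
  select last_name into v_name
  from employees
  where employee_id = 999999;
exception
  when no_data_found then
    dbms_output.put_line('no matching employee');
end;
/

-- Explicit cursor: no exception is raised; you test the handle yourself.
declare
  cursor c_emp is
    select last_name from employees where employee_id = 999999;
  v_name varchar2(100);
begin
  open c_emp;
  fetch c_emp into v_name;
  if c_emp%notfound then
    dbms_output.put_line('no matching employee');
  end if;
  close c_emp;
end;
/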

Similar Messages

  • Using WebI with SAP BW Data - Best practice for introducing BW Accelerator

    Hi,
    We have a significant investment in using BOE XI 3.1 SP2 integrated with SAP BW 7.0 EHP1 SPS05.
    Now we intend to introduce BW Accelerator to improve data fetch performance for adhoc (WebI) analysis and the formatted reports built using WebI (InfoView).
    The data volume in question is approx. 2 million+ records for each WebI report / adhoc analysis (20 to 30 columns).
    The solution could be BW Cubes --> BW Accelerator --> BW Queries --> BO Universe --> WebI using InfoView.
    Does introducing BW Accelerator really help in a case like the one described above?
    We understand that BW Accelerator improves the performance of the underlying data and hence the BW Queries do work faster; but does it really give a (9x to 10x) performance improvement for the MDX queries generated by the BO Universe (BOE XI 3.1 SP2) & WebI?
    What is the roadmap for the future with respect to BW Accelerator and SAP BI BO integration, if we intend to use WebI?
    Or should we migrate to BO Explorer as the front end for adhoc analysis?
    Is BO Explorer able to present 1 million+ records with 20-30 columns?
    What is the best practice / better on performance, as an integrated product / solution?
    1) BW Cubes --> BW Accelerator --> BW Queries --> SAP Integ Kit --> BO Universe --> WebI
    2) BW Cubes --> BW Accelerator --> ??? --> BO Explorer --> ??? --> WebI ???
    3) BW Cubes --> BW Accelerator --> ??? --> BO Pioneer --> ??? --> WebI ???
    4) BW Cubes --> BW Accelerator --> ??? --> BO Explorer
    5) BW Cubes --> BW Accelerator --> ??? --> BO Pioneer
    6) BW Cubes --> BW Accelerator --> BW Queries --> SAP Integ Kit --> Crystal Reports (to handle above data volume)
    7) BW Multiproviders --> BW Accelerator --> BW Queries --> SAP Web Analyzer (to handle above data volume)
    regards,
    Rajesh K Sarin

    Hi,
    We have a mix of adhoc analysis (60%) and formatted reports (40%). We selected WebI as the tool for this purpose & used it for requirements which process approx. 2M records. We faced performance bottlenecks (we are on BO XI 3.1 SP2 & SAP BW 7.0 EHP1, SP05).
    We are further analyzing the possibility of introducing BWA, if it can handle similar record processing & still preserve our investment in OLAP Universes, WebI, the SAP Integration Kit & training users on the WebI frontend.
    I see a lot of documentation suggesting "BO Explorer and BWA" - we understand that BWA would improve the DB time and BO Explorer would help on the front-end / OLAP time.
    Request your guidance on the roadmap & continuation of investment using BWA + WebI.
    regards,
    Rajesh K Sarin

  • Number ranges - Import of Legacy Data - best practice

    Hi,
    we are planning to move legacy data objects to our SAP CRM.
    These Objects have an external key (Numeric, 6 digit) in a number range that is relatively full and highly fragmented.
    Is there a best practice for implementing internal number assignment for this kind of pre-filled number range?
    The internal key in SAP would be different and under our control, the external key is the interesting one.
    Cheers,
    Andreas

    Hi Luís,
    The scenario is in the context of insurance business.
    The setup: SAP CRM as the central business partner system. In the CRM we keep the policy numbers of the surrounding (non-SAP) policy systems as references (I'm talking about insurance policies...).
    For each policy we create a One Order object, containing, among other things, the LOB, policy type and policy number.
    These policy number ranges are to be maintained in the central CRM system in the future.
    And one of these systems has the situation described above:
    a 6-digit key in a number range that is relatively full and highly fragmented. They are managing their numbers in an xls right now, but we would also have them migrated into our system.
    And after the migration we would be responsible for finding an unused number whenever a new policy is to be created.
    Cheers,
    Andreas
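
    Since finding a free number in the fragmented range is the crux here, a minimal SQL sketch of one way to do it once the taken numbers are migrated into a table (policy_numbers, policy_no and the 100000-999999 range are assumptions for illustration):

      -- Lowest free number in 100000..999999: it is either the range start
      -- or the successor of some taken number.
      select min(t.candidate) as next_free
      from (select policy_no + 1 as candidate from policy_numbers
            union all
            select 100000 from dual) t
      where t.candidate between 100000 and 999999
      and not exists (select 1 from policy_numbers p
                      where p.policy_no = t.candidate);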

  • Item Master Data Best Practice

    hello all
    We have been using SBO for more than a year now, and yet we still constantly add new items to our item master data. What is the best practice for maintaining the item master data? To help you understand, this is the scenario: in the factory/mill there are a lot of spare parts and equipment, and if a piece of equipment is damaged we have to buy a new one. Here the problem occurs: if it differs only in part number, we use another item code for it. With this practice we later found out that we have more than one item code for a single item because of the naming convention, so we have to put one item code on hold and use the other, since we can't delete it anymore. Sometimes an item code occurs only once in the item history.
    please suggest what is the best Practice on this matter.
    1. Item Grouping
    2. Naming Convention
    etc..
    NOTE:
    our goal is minimize adding of items in item master data.
    FIDEL

    FIDEL,
    From what I understand, you have to replace broken / damaged components of items like bulldozers, payloaders and mill turbines. This is the reason why you defined the parts as new items.
    From your item code examples, I am not clear why you have 2 different names for the same item, and also what you mean by "this two item codes are actually the same".
    If you are just buying parts to replace components, and if you do not need to track them, then I would suggest you create generic item codes in the item master and simply change the description when you buy / sell them.
    Example:  Same Item different description.
    REPL101  OIL FILTER
    REPL101  FUEL FILTER
    REPL101  xxxxx
    This way you are not going to keep creating items in the database and also you can see the description and know what it was.
    Simply change the ItemName in the marketing document and, instead of pressing Tab to move to the next column, press CTRL+Tab so that SAP does not auto-check the newly typed name against the item master.
    Let me know if your scenario is otherwise.
    Suda

  • Single Template support multiple formats of data - Best practice

    I have a requirement to create Invoices in a single PDF file.
    The invoices would belong to different categories - debit notes, credit notes, invoices with a single product, invoices with multiple products etc. - and each will have a different format.
    I initially thought the right way to create a single PDF is to use a single template, with the different invoice formats separated by conditional formatting.
    But I see from reading the blogs that the other way is to create sub-templates (one each for credit, invoice, debit etc.) and plug them into the main template.
    I would like to know what is the best practice to follow in the above case.
    If I were to use sub-templates, how would I make it possible to view the invoice stub only on the first page, since the data from the sub-template would flow to multiple pages?
    Is adding the stub data to the footer the only option? Please can someone share an example template with me.
    Thanks
    Shandrila

    Shandrila
    If the various document types are of a single XML format, i.e. the same structure with just document-type differences, and the layout format is the same with just different data, or very minimal changes that can be handled with conditional formatting, then I think it would be OK to have a single report for all document types.
    If the data structures are very different and the layout requirements are different, then I would create separate reports for each document type. If the data structure is the same but the document-type layouts are different, then go for separate layout formats.
    Going down the sub-template path can be a little difficult; you might end up with a very complex set of templates that are almost as much of a pain to manage as the original report you are trying to replace.
    Here's the best scenario IMHO:
    1 data extract, parameterized to pull invoice, CM, etc. data based on the user request.
    Multiple layout templates, 1 for each document type. If you have common layout sections across the layouts, e.g. address blocks, then break them out as sub-template components that all of the layouts can access and share.
    Multiple report definitions, sharing the data extract, each with a single layout template associated with it.
    cheers
    Tim

  • Storing data - best practice?

    Hi,
    I wonder if there is any best practice for storing data in my EP 6.0 portal? For instance, on a standard website, if you have a list of events, each event can be stored in a related SQL database and can then be fetched and updated whenever necessary.
    What is the best way to do this when developing portal content? The reason I am asking is that I want to develop a Web Dynpro application where I can select a date and then display all registered events on that day in my portal.
    Best regards
    Øyvind Isaksen

    Okay - and then use an RFC call from the Web Dynpro application to fetch data from the SAP database?
    This answered my question.
    Best regards
    Øyvind Isaksen

  • Large amount of data best practices.

    Hello Experts,
    I have a scenario where I have to extract a large volume of data from an SAP system to an external database using SAP PI. The process has to extract about 400,000 rows from SAP and send them to this external database. I guess the best way to insert the data into the database is using the JDBC adapter, but I'm wondering what's the best adapter to use to communicate between SAP R/3 and SAP PI. What's the best way to send a message of 400,000 rows to SAP PI - files, IDocs, proxies? Could you please tell me if there's any documentation on the topic?
    Thank you in advance.

    Hi,
    In your case, a client proxy to JDBC is best for performance.
    Please see the link below; it explains the scenario (proxy to JDBC) in detail.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e0ac1a33-debf-2c10-45bf-fb19f6e15649?quicklink=index&overridelayout=true
    Regards,
    Rajesh

  • Session Data - best practice

    Hi,
    We are designing a Servlet/JSP based application that has a web tier separate from the middle tier.
    One of our apps has a lot of user input - on average 500K and up to 2MB of data in the request.
    We do not have a way of breaking this application up (i.e. the whole 2MB of form data must be posted at one time).
    We have 2 solutions and want to know which is the better one and why:
    1. Use the session and store all the information in the session.
    2. Use JavaScript to assemble all the data and submit it at one time.
    I prefer #2 because I don't want to use sessions and also because I don't want to use a database on the web tier.
    Please help me explain this to my colleagues, who are convinced that we have to use sessions to store this data.
    -JJ

    I'm not overly familiar with WebLogic clustering, but I assume it is similar in concept to OC4J clustering. The thing you need to be aware of is that any object stored in HttpSession needs to be completely serializable. The LibrarySession that you create/obtain for a user cannot be serialized, so you need to come up with a technique that allows a user to obtain the same LibrarySession instance, from whatever store it may be in, across multiple requests.
    CM SDK, Files, Content Services typically achieve high availability through use of multiple midtiers with Big IP in front. Our out-of-box applications do not make use of OC4J clustering.
    thanks,
    matt.

  • Best Practice for Expired updates cleanup in SCCM 2012 SP1 R2

    Hello,
    I am looking for assistance in finding a best practice method for dealing with expired updates in SCCM 2012 SP1 R2. I have read this blog post: http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    I have been led to believe there may be a better method, or a more up-to-date best practice process, for dealing with expired updates.
    On one hand I was hoping to keep software update groups intact, to have a history of what was deployed, but I also want to keep things clean and avoid issues down the road, as I used to have in 2007 with expired updates.
    Any assistance would be greatly appreciated!
    Thanks,
    Sean

    The best idea is still to remove expired updates from software update groups. The process described in that post is still how it works. That also means that if you don't remove the expired updates from your software update groups, the expired updates will still show...
    To automatically remove the expired updates from a software update group, have a look at this script:
    http://www.scconfigmgr.com/2014/11/18/remove-expired-and-superseded-updates-from-a-software-update-group-with-powershell/
    My Blog: http://www.petervanderwoude.nl/
    Follow me on twitter: pvanderwoude

  • General Discussion - Best practice to manage Process order

    Hi Experts,
    What is the best practice for managing process orders?
    1. Quantity change - I can make a quantity adjustment in R3 and in APO.
    2. Source change - I can make a version change from the order header. I can also make a source change in APO by selecting a different PPM. Which is the best option?
    3. Re-read master data - is it best practice to re-read master data from R3 or from APO?
    I feel that for all the above scenarios process orders should always be managed in R3. But I am still wondering why we have the same flexibility in APO too?
    Can

    Hello,
    we are just migrating from 4.6c to ECC 6.0 and I have a couple of workflows to adapt.
    For background steps I defined, in the corresponding BOR methods, an exception to be fired when no result is available (e.g. no mail address available). Normally I defined them as temporary errors.
    I activated in the WI outcome section the line for this exception, and so the workflow processed this branch when the exception appeared. It worked fine.
    Now, in ECC 6.0, the same workflow gets stuck in the WI. The exception is fired (I can see it in the log as an "Error message"), but the WI is still in status "in process". It doesn't continue with the error outcome branch.
    Is this new logic in ECC 6.0? Do you have any idea what to do? I used this logic some dozen times in different methods and workflows, and it will give me a headache if I have to change everything ...
    Thank you!
    Best regards,
    Thomas

  • Best Practice for Significant Amounts of Data

    This is basically a best-practice/concept question and it spans both Xcelsius & Excel functions:
    I am working on a dashboard for the US Military to report on some basic financial transactions that happen on bases around the globe.  These transactions fall into four categories, so my aggregation is as follows:
    Year,Month,Country,Base,Category (data is Transaction Count and Total Amount)
    This is a rather high level of aggregation, and it takes about 20 million transactions and aggregates them into about 6000 rows of data for a two year period.
    I would like to allow the users to select a Category and a country and see a chart which summarizes transactions for that country ( X-axis for Month, Y-axis Transaction Count or Amount ).  I would like each series on this chart to represent a Base.
    My problem is that 6000 rows still appears to be too many rows for an Xcelsius dashboard to handle.  I have followed the Concatenated Key approach and used SUMIF to populate a matrix with the data for use in the Chart.  This matrix would have Bases for row headings (only those within the selected country) and the Column Headings would be Month.  The data would be COUNT. (I also need the same matrix with Dollar Amounts as the data). 
    In Excel this matrix works fine and seems to be very fast. The problem is with Xcelsius. I have imported the spreadsheet, but have NOT even created the chart yet and Xcelsius is CHOKING (and crashing). I changed Max Rows to 7000 to accommodate the data. I placed a simple combo box and a grid on the canvas - BUT NO CHART yet - and the dashboard takes forever to generate and is REALLY slow to react to a simple change in the combo box.
    So, I guess this brings up a few questions:
    1)     Am I doing something wrong and did I miss something that would prevent this problem?
    2)     If this is standard Xcelsius behavior, what are the Best Practices to solve the problem?
    a.     Do I have to create 50 different Data Ranges in order to improve performance (i.e. Each Country-Category would have a separate range)?
    b.     Would it even work if it had that many data ranges in it?
    c.     Do you aggregate it as a crosstab (months as column headings) and insert that crosstabbed data into Excel?
    d.     Other ideas that I'm missing?
    FYI:  These dashboards will be exported to PDF and distributed.  They will not be connected to a server or data source.
    Any thoughts or guidance would be appreciated.
    Thanks,
    David

    Hi David,
    I would leave your query
    "Am I doing something wrong and did I miss something that would prevent this problem?"
    to the experts/ gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to a certain extent.
    Regards
    Nikhil

  • Best Practice Table Creation for Multiple Customers, Weekly/Monthly Sales Data in Multiple Fields

    We have a homegrown Access database, originally designed in 2000, that now has an SQL back-end. The database has not yet been converted to a higher format such as Access 2007, since at least 2 users are still on Access 2003. It is fine if suggestions will only work with Access 2007 or higher.
    I'm trying to determine if our database is the best place to do this or if we should look at another solution. We have thousands of products, each with a single identifier. There are customers who provide us regular sales reporting for what was sold in a given time period - weekly, monthly, quarterly and yearly time periods being most important. This reporting may or may not include all of our product identifiers. The reporting is typically based on calendar-defined timing, although we have some customers who have their own calendars which may not align to a calendar month or calendar year, so recording the time period can be helpful.
    Each customer's sales report can contain anything from 1,000-20,000 rows of products. Each customer report is different, and they typically have between 4-30 columns of data for each product; headers are consistently named. The product identifiers included may vary by customer and even within each report for a customer; the data in the product identifier row changes each week. Headers include a wide variety of data such as overall on hand, overall on order, unsellable on hand, returns, on-hand information for each location or customer grouping, sell-through units and dollars information for each location or customer grouping for that given time period, sell-through units and dollars information for each location or customer grouping for a cumulative time period, warehouse on hands, warehouse on orders, the customer's unique categorization of our product in their system, the customer's current status code for that product, and so on.
    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to the overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to set up tables for our largest customers so I can create queries and pivot tables to more quickly look at sales-related information by category, by specific product(s), by partner, by specific products or categories across partners, by specific products or categories across specific weeks/months/years, etc.? We do have a separate product table, so only the product identifier or a junction table may be needed to pull in additional information from the product table with queries. We do need to maintain the sales reporting information indefinitely.
    I welcome any suggestions, best practices or resources (books, web, etc).
    Many thanks!

    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to set-up tables .....
    I assume you want to migrate to SQL Server.
    Your best course of action is to hire a professional database designer for a short period like a month.
    Once you have the database, you need to hire a professional DBA to move your current data from Access & Excel into the new SQL Server database.
    Finally you have to hire an SSRS professional to design reports for your company.
    It is also beneficial if the above professionals train your staff while building the new RDBMS.
    Certain senior SQL Server professionals may be able to do all 3 functions in one person: db design, database administration/ETL & business intelligence development (reports).
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
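
    As a rough illustration only (all names are hypothetical, not a vetted design), a designer would likely replace the per-customer spreadsheets with one normalized sales-fact table, storing one row per customer, product, period and metric so that reports with 4-30 differing columns can share a single structure:

      CREATE TABLE SalesReportFact (
        CustomerID   int          NOT NULL,
        ProductID    int          NOT NULL,  -- joins to the existing product table
        PeriodStart  date         NOT NULL,  -- supports non-calendar customer periods
        PeriodEnd    date         NOT NULL,
        MetricName   varchar(50)  NOT NULL,  -- e.g. 'OnHand', 'SellThroughUnits'
        MetricValue  decimal(18,4),
        PRIMARY KEY (CustomerID, ProductID, PeriodStart, MetricName)
      );

    Queries and pivot tables can then aggregate across customers, categories and periods from this one table instead of cross-referencing spreadsheets.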

  • Best Practice to fetch SQL Server data and Insert into Oracle Tables

    Hello,
    I want to read SQL Server data every half an hour and write it into Oracle tables (in two different databases). What is the best practice for doing this?
    We do not have any database links from Oracle to SQL Server or vice versa.
    Any help is highly appreciated.
    Thanks

    Well, that's easy:
    use a TimerTask to do the following every half an hour:
    - open a connection to sql server
    - open two connections to the oracle databases
    - for each row you read from the sql server, do the inserts into the oracle databases
    - commit
    - close all connections

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data, they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the tables and insert the data from ground zero.
    I am very unaccustomed to this kind of environment, and it seems much riskier to me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document, and that document is used for the deployment.
    I believe their rationale is that they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate everything.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all costs. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in, ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse generated by an ETL process, might be exempt; but the process that creates that data is not exempt - that process, and ultimately the data, must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business user, IT and even legal. The deployment documents always include recovery steps, so that if something goes wrong or the deployment can't proceed, there is a documented procedure for restoring the system to a valid working state.
    The deployments themselves that I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party assists in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree, for a simple 5 new table and small amount of data scenario it may seem like overkill.
    But, despite what you say, it simply cannot be that easy, for one simple reason: adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention what changes are being made to actually USE what you are adding.
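
    For what it's worth, a minimal sketch of the kind of ordered, repeatable script the DBAs are asking for (the table and data are hypothetical placeholders):

      -- 1. Build the object from scratch.
      create table status_lookup (
        status_id   number(6)     not null,
        status_code varchar2(10)  not null,
        description varchar2(100),
        constraint pk_status_lookup primary key (status_id)
      );

      -- 2. Seed the data, in dependency order.
      insert into status_lookup (status_id, status_code, description)
        values (1, 'ACT', 'Active');
      insert into status_lookup (status_id, status_code, description)
        values (2, 'CAN', 'Cancelled');

      -- 3. Commit exactly where the deployment document says to.
      commit;

      -- Recovery step, documented alongside the deploy steps:
      -- drop table status_lookup;

    Each such script, plus the document describing when to run it and how to back it out, is what makes the deployment repeatable without relying on backups.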

  • Data Federator XI 3.1 - best practices

    We are planning on rolling out a Data Federator setup in our company and I'm looking for some best practices.
    The major question I have is: do we install the Data Federator server components on a dedicated server, or can/should we install the components on one of the machines of our BOE R3 cluster (4 nodes -> 2 mgmt and 2 processing)?
    Is there any document that contains a summary of the best practices for setting up a Data Federator environment?
    Kind regards
    Guy

    Hello,
    the advice is to have a dedicated machine for the DF server.
    DF can become memory and CPU intensive for large queries, so a dedicated machine improves DF performance and avoids negative impacts on other services (e.g. BOE).
    A lot of calculation and temporary storage is done in memory, so the advice is to add as much RAM as needed for large queries. If the RAM is not large enough you will get disk swapping and hence you'll notice lower performance.
    Hope that it helps
    Regards
    PPaolo
