Accessing BSIS in an efficient manner

Hi,
I developed a new GL report for a particular fiscal year period (with an option for the full period too) which accesses the BSIS table. It works fine for a smaller G/L account (with a low number of records) but runs forever for a big G/L account (due to the huge volume of records). Can someone advise a better way to do the same? My SELECT query looks similar to the one below:
  select (bsis_fields) from bsis
    appending corresponding fields of table i_bsis_pkg_rv
    where bukrs eq p_bukrs
      and hkont eq p_hkont
      and gjahr le p_pgjahr
      and budat in s_budat
      and blart in s_blart    " e.g. 'RV', 'ZR'
      and (bsis_where).
Thanks in advance.
Regards
Rajesh.

You could try a join of BKPF and BSIS: BKPF has an index on posting date and one on document type, and BSIS has an index for access by document number. Then hope that the optimizer chooses the best starting table for the access, based on G/L account (BSIS) or posting date (BKPF) and the specific selection criteria, whichever provides the better selectivity. In this special case it might be necessary to work with histograms instead of normal table statistics: even though you have an EQ condition for one G/L account, it could be better to select from BKPF first for a narrow posting date range, depending on the actual data distribution.
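A minimal sketch of such a join, reusing the selections from the original query (the target table i_bsis_pkg_rv and the parameters come from the post; the field list here is only illustrative, so select what your report actually needs):
  select bsis~bukrs bsis~hkont bsis~gjahr bsis~belnr
         bsis~budat bsis~blart bsis~shkzg bsis~dmbtr
    from bkpf inner join bsis
      on  bkpf~bukrs = bsis~bukrs
      and bkpf~belnr = bsis~belnr
      and bkpf~gjahr = bsis~gjahr
    appending corresponding fields of table i_bsis_pkg_rv
    where bsis~bukrs eq p_bukrs
      and bsis~hkont eq p_hkont
      and bsis~gjahr le p_pgjahr
      and bkpf~budat in s_budat
      and bkpf~blart in s_blart.
Putting the posting date and document type conditions on BKPF gives the optimizer the choice between the BKPF secondary indexes and the BSIS access by account, whichever is more selective.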
If you select an entire year for a big G/L account, run the report in the background.
Thomas

Similar Messages

  • Place background on each photo in an efficient manner?

    Say I have a bunch of pictures, and I want to place each of them on a certain (bigger) background, is there a way to do this in an efficient manner instead of opening up each photo and manually putting it on the background and then saving them individually?
    So for example, I have a bunch of pictures taken and they're say, 100x100. And I have this background that's say 150x150 and I want to have each photo with the background behind it. Is there any way I can do this efficiently?
    Photoshop CS3 on Windows XP SP3.
    Thank you!

    That's what actions are for. Read up on them.
    Mylenium

  • How do I access an external hard drive attached to an Airport Extreme using the Finder's "Go to the folder" feature?

    How do I access an external hard drive attached to an Airport Extreme using the Finder's "Go to the folder" feature?
    I have an external hard drive attached to my Airport Extreme and I can access it just fine through Finder.  I would like to be able to keyboard navigate to it in an efficient manner.  I tried to use the "Go to the folder" feature but was unable to.
    I had tried dragging the drive to the Network folder but the shortcut it created does not lead to the drive's contents... it seems to lead right back to the parent directory.
    I hope I explained the question well.  Thank you for all of your help and time in tending to my inquiry.

    Can't you set up "Back to My Mac", then go to Finder preferences and put a check mark next to everything under the Shared section? Also set up Back to My Mac on your AirPort Extreme.

  • Security and Management of Wireless Access Points

    We have a network of eight (8) Cisco 350 Access Points.
    We would like to enable security through WEP and designating specific MAC (Hardware) addresses.
    Please advise as to the most efficient manner of inputting hardware addresses into all of our access points and managing many access points.

    Hmmm....all these replies, with good information, and no one answered your question!
    You can't cut and paste a list of MACs into a Cisco AP (how come, I don't know). What you need to do is enter one MAC address. Then download a non-default config file out of the AP. Then find the lines that changed, and you have your template for adding MAC address lists in one fell swoop. I made a little excel spreadsheet to let me paste in a list of MACs, then spit out the config file lines that you can add as an "additional configuration file" via the web gui.
    You could also add the list via SNMP.
    There's also an import utility in the cli for the ACS server that will let you suck in MAC addresses.
    Hope this helps.
    Just remembered, the APs for some reason convert the hex format of a MAC into dotted decimal. So, when you paste your list in, you need to convert it from hex to dotted decimal, produce your config lines with those, and then shoot those config lines to the AP. I couldn't find anyone in the TAC that could explain why adding a list of MACs was such a chore.

  • Turning on Full Keyboard Access for PB G4

    I'm a new Mac user with a PB G4 15". I've searched the forums for an answer to this to no avail.
    In The Tao of Mac (http://the.taoofmac.com) the author says: "If you're keyboard-oriented, go into System Preferences | Keyboard and Mouse | Keyboard Preferences and Turn on full keyboard access. Now you can deal with dialog boxes the way you're used to, as well as accessing menus and toolbars with the keyboard."
    When I try this on my PB, there is no "Turn on full keyboard access" button - although the Keyboard Shortcuts tab does have a checkbox for "Turn full keyboard access on or off" with a shortcut of ^F1 (which doesn't seem to do anything).
    I'm very keyboard literate and would like to access menus in the same manner as i can with Windoze - i.e. Alt-underlined letter in menu, e.g. Alt-f brings up file menu.
    Any help most appreciated.
    Regards...john...
    Powerbook G4 15-inch   Mac OS X (10.4.3)  

    This may make or break my resolve to switch to Mac. While I adore the interface, the graphics power, and the art/design tools and sensibility of the Mac, it really is like "typing with boxing gloves on," as someone warned. It makes sense for an OS that introduced the GUI and the mouse, but I'm still finding it difficult to adapt to that style of work. For me, taking my hands off the keyboard to use the mouse is anathema.
    You have nearly as much keyboard control as on Windows, it's just different. And in some cases, better. For instance, when two menu commands start with the same letter, the alt key can't be the same for both commands, so Windows assigns a different letter to one of them and you have to learn it. The Mac way can be faster because you don't have to learn the alt key; instead, you just start typing the command name and it's selected as soon as a match is made. For example, if I have two commands, "Slide Show Options" and "Save Slide Show as Movie," I know that there are no alt keys to memorize; as soon as I drop that menu with the keyboard, typing "sl" will select "Slide Show Options" and typing "sa" will select "Save Slide Show as Movie."
    In dialog boxes, be aware that many Mac developers have implemented single-key shortcuts for buttons, which do not require full keyboard access to be turned on. For example, if a "Save changes before closing?" dialog box comes up with "Save and Close," "Don't Save," and "Cancel" buttons, and "Save and Close" is the default, the Enter key will hit the default key, the Esc key will hit the Cancel button, and the D key will hit the Don't Save button. It is also possible, depending on the app, that the S key will hit Save and Close and the C key will hit Cancel.

  • BSI Configuration

    Dear Experts,
    I am new to US payroll and to BSI also, I need to see/check the configuration of TAX for a particular state in BSI system.
    Do i need any application to access the BSI system? or can i see the same from BSI portal.
    Regards
    Sonu

    Hi Sonu,
    This is Kathy from BSI Support.  Depending on what kind of configuration you are talking about, you would need to have the BSI TaxFactory client tool available to you.  There is also information regarding all the authorities and tax types that BSI supports available on our website (portal).
    We would welcome the opportunity to assist you.  Please consider that we have training available to explain the ins and outs of our product.  While considering the training, please feel free to contact us at any time, either by phone or email.
    Welcome to the BSI TaxFactory/SAP community!
    Regards,
    BSI Support - Kathy

  • Svi access lists

    Will this access list, applied in the manner shown below, prevent any traffic from traversing between our visitor network and production?
    I really do not want the guest network to be able to access production. There will be many production VLANs that are 10.x.x.x something.
    interface Vlan103
    des visitor
     ip address 192.168.2.254 255.255.255.0
     ip access-group no-visit in
    interface vlan 10
    des production vlan
    ip address 10.49.1.0 255.255.255.0
    Extended IP access list no-visit
    10 deny ip 192.168.2.0 0.0.0.255 10.0.0.0 0.255.255.255
    20 permit ip any any

    Yes this should work.

  • Recent MCI to Verizon Residential Service conversion - Terrible

    Hi Folks,
    It is my understanding that Verizon staff members do review the posting here. I hope that this is the case as I would like to share many issues that I have experienced to help improve the process and services offered by Verizon.
    First, ALL of the folks that I spoke with were very pleasant on the phone including someone in India! This is about all the good news that I can offer.
    Background - Happily using MCI residential services over a number of years with two accounts. MCI worked great and performed all services in a clean and efficient manner.
    Forced to leave MCI services and advised via letter that Verizon would pick up in the footsteps of MCI - I also understood that I could select phone service providers other than Verizon if I chose to. Now I wish I had!!!!!
    Typical MCI services that I had:
    Caller ID
    Call Waiting Caller ID
    Three party Conference Call
    Caller announce
    Speed Dial
    Voice Mail
    Call Waiting, No Answer Transfer to multiple locations with Caller announce (managed by me the user)
    Email Caller ID info (Sent to places I managed i.e. Black Berry - pop mail etc)
    Internet Voice Mail pickup & Play - Store
    Fast Access to Voice Mail from local phone (Dial 00)
    Fast Access to Voice Mail from cell phone
    Easily talk to live person for any variety of issues - Trouble - Billing - Competitive discounts - Promotions
    Initial conversion to MCI (ALL services available and ready the day of conversion!!!!!!!!)
    Now the Verizon story and Oh what a story they weave.
    Contacted the conversion Team as per the conversion letter TWICE two accounts.
    Discussed all of the features that I had with MCI and Verizon's features & price (Did this twice - Two accounts)
    Told by Verizon conversion department that 99% of the services are the same - NOT TRUE! Further the services were included in the discounted package (NOT TRUE!)
    No date provided for the conversion
    Call Waiting Caller ID was not provisioned
    Speed Dial is not part of the package - More dollars and wait to have this added after service was turned up
    Speed Dial is crippled and not very useful - No pause control, so I can not program access number, pause, pause, pause, then account code or access number or PIN number etc. Most Verizon folks I talked to have no clue as to what a pause is in a speed dial string (this concept has been around for over 20 years - what is with Verizon?)
    Three party Conf call is NOT included in package Additional cost and wait to have this added after service was turned up
    Call Waiting, No Answer Transfer to multiple locations (managed by me the user) is not available
    Caller announce not part of package
    Told that Email Caller ID info (Sent to places I managed i.e. Black Berry - pop mail etc) is not available - Found out from friends that have Verizon service that it is available via Verizon Call Assist
    Contacted Verizon one day after orders were placed to ensure that I would have Verizon Call Assistant - Still waiting a week later - placed over 40 calls to numbers provided by Verizon (passing me on to someone or something else, i.e. automated talking systems) and 12 hours on the phone over three days!!!! Most people that I spoke to have no clue what Verizon Call Assistant is, much less how to get info or place an order for it - even though it is FREE and part of the service if you like.
    Verizon services turned up randomly over the past week - What happened to "You are now on Verizon and your services are ALL available"?
    Additional note - After holding for almost an hour to connect with a live person in the Verizon Call Assist group, I was advised that the email they offer for Voice Mail deposits only indicates that you have a voice mail and does not include any Caller ID info - Since this service is still not turned up, I am expressing what I was told. This means I am back to the drawing board - I really need to know who called, remotely (i.e. BlackBerry etc.), whether they left a VM or not. Time to start looking for a new phone company....
    I'm sure I left out a few things here and there.
    My frustration level is beyond belief and it's hard to put into words that are acceptable for a forum. @#$%#$%&$^#%^
    I use Verizon Wireless and have for many years - They have been top notch so I thought the incumbent wireline provider (Verizon) would have its act together. Boy was I wrong.
    I invite any and all responses especially any Verizon folks that really want to do something about these issues - I would be happy to discuss in detail any and all issues presented.
    Ren

    Just a thought about Speed Dial -   even if there were some way to program in a pause I do not see how this would help you in dialing account numbers or pin numbers because the Speed Dial service does not actually produce call tones but simply enables the user to dial a stored phone number by pressing 1 or 2 digits instead of 10.

  • Duplicates in Itunes:

    I recently moved all my songs over to an external drive but couldn't access them from iTunes. Thanks to responses from this forum I fixed that problem by moving the entire iTunes folder to the external drive. My problem now is that every single song shows up with a duplicate (all 250 GB) in iTunes. Only one of them gets accessed from the file. The other says it can't find the location.
    My question is: Can I eliminate (delete) all the duplicates showing up in iTunes in an efficient manner?
    Thanks in advance for your help.

    Barry,
    The steps given to you are ones that I would use to find and mass delete 'broken links'. These are iTunes references that do not have a linked underlying music file on the PC. It is not a way to delete true duplicate references that validly point to an underlying song file.
    'Dead Links' may look like duplicates, but when you attempt to play them, you get the '!' indicator within iTunes.
    The theory is that by editing all the song references within iTunes using the BPM field, only the valid references will update that field. You could then sort on that BPM field and mass delete all the songs from the Library that did not contain the edit.
    See: http://discussions.apple.com/message.jspa?messageID=607582
    Also -- Download this: http://ottodestruct.com/iTunes/RemoveDeadTracks.txt
    Rename it to RemoveDeadTracks.js.
    Run iTunes.
    While iTunes is running, double click RemoveDeadTracks.js to run the script. It'll remove all the exclamation marked tracks.
    One step, no fuss, no muss.
    Post back if you have a different problem that the solution above does not address.

  • File Sharing creamed two Macs

    How we creamed two Macs running Leopard, and how we got them back.
    My husband got a new early 2008 MacPro and gave me his iMac. Initially we tried to use the Migration Assistant to move his data, but that hung, and caused us to have to start over. Then we moved over mail and other files, booting the iMac in transfer mode. But, after the majority of the data was moved, I wanted to move the iMac to a different room. So, I had the bright idea to share the Macintosh HD on the iMac, so he could see what applications he used to have, and move over any data files he needed. For some reason, I also decided to share the MacPro HD. The share access was read only.
    Then I read that it was not a good idea to share the HD, as someone coming in over the Internet could gain access. We do have a router, but I like to be safe. So, I took the sharing off the iMac. I unclicked sharing, and removed "Everyone" from the sharing and permissions. I put the iMac to sleep that night, and it woke up fine the next day. I did notice the printer wasn't working, and the scanner, but thought it was something I did not understand about Macs. So, thinking everything was okay, I did the same thing to his machine.
    He came in and immediately noticed some USB devices had been dropped, so rebooted. Or tried to. The MacPro would not reboot. About this time, I tried to run Repair Disk Permissions on the iMac and it hung for over an hour. We called Apple Support on the MacPro. They had us try resetting the pram and several other things and nothing worked. Then they told us to do an archive and restore. This has never worked for us. We had previously tried it many times with the iMac (later zeroed the disk and all was fine from then on). The support person knew we had a disk clone about 4 days old and Time Machine, but did not suggest using them. When the archive and restore failed, we went to Time Machine. We did not do it in the most efficient manner, but did get it done, and everything now appears to be fine, except, of course, we still have file sharing on both the Mac HDs. * And the Apple Support person insisted that changing the file sharing could NOT have caused this problem. Said he had many customers do it. This was the only thing we had done, and for both machines to fail in almost the same way...we are SURE it was the file sharing.
    I also ordered a new external firewire disk drive so that I would have a clone as well as Time Machine. I have also turned on the firewall.
    If someone has had the same problem, and solved it, I could try it on the clone.
    The morals to this story: 1) if you get a new Mac and need to transfer data, don't be in too big a hurry to give up transfer mode 2) Don't share your hard disk, or if you do, don't unshare it 3) Have a restore plan in place - know how you are going to use Time Machine to recover from a catastrophic failure 4) Use Time Machine, it is wonderful!

    Hi Alex, are you
    sharing these two Macs over Ethernet, WiFi, Bluetooth, FireWire or other?
    using the default AFP or possibly SMB/CIFS (Windows file sharing) to share? You would have had to specify this.
    Can you log into the users on each of the Macs individually, to verify that your access is correct?
    By default, if each Mac's users that you need are shared through their individual System Preferences/Sharing/File Sharing = ON, then they will appear via Bonjour (.local) on the local LAN and in each other's Finder. You should see it also in the OS X 10.5.8 system's Finder.
    If the above is true, each Mac should be able to drop items into the other host's Public/Drop Box folder in that user's home directory (~/Public/Drop Box). There is no need to log into each other's accounts. Simply drop the objects into the other's Public/Drop Box folder.
    This works simply and there is no need to go to any extremes to set it up.
    That user can then access it in r/w mode.
    Anyway, post back on the above. Let's see if we can resolve it.
    Post your results for others to see.
    Warwick

  • Re: Fwd: Wlnav tool

    Replying to [email protected] so other users can also benefit from this discussion.

    Satya Ghattu wrote:

    > ---------- Forwarded message ----------
    > From: Ben Ames <[email protected]>
    > Date: Sep 1, 2006 5:24 PM
    > Subject: Wlnav tool
    > To: [email protected]
    >
    > Hello,
    > We recently upgraded to WebLogic 9.2 and have started using the wlnav tool to help us monitor our systems. It has been really helpful to us in understanding what's going on with our servers. I have some questions, though, about some things we would like to do with the tool.
    > The first is that I know that the data harvested from WebLogic is stored in the server logs.

    Data is stored by WLDF in a file store which is optimized for storage and retrieval of information.

    > I've read that the WLDF framework can log this data to a database, but is it possible for wlnav to connect to a database to get the information logged to it?

    WlNav uses the WLDF accessor to access all the data. Once WLDF is configured to persist data to a DB, I think the accessor should seamlessly be able to read data from the DB. I'm CC'ing this mail to the diagnostics newsgroup so someone from the Diagnostics team can confirm this behavior and provide more pointers.

    > I also like using the summary tab to get all the graphs into one web page that I can look at and compare without having to tab through multiple screens. However, since graphs are grouped by MBean, it makes it difficult to separate some of the data or see data at all. A good example would be UsedHeap and JVMProcessorUsage for the JRockit MBean. Is there a way to separate these graphs on the summary or would this have to be added?

    Currently in the summary view it's only possible to use grouping by MBean. Can you save one of your summary pages (images too) and provide it as a zip file, so I can easily visualize the problem?

    When we wrote WlNav we wanted to show the power of WLST and WLDF; since then the Diagnostics team has provided a console UI extension which should be able to support your customization need. For more info see http://e-docs.bea.com/wls/docs92/wldf_console_ext/index.html

    > For our production systems, we've noticed that generating graphs can take five to ten minutes depending on the duration and number of servers we're graphing. A one hour duration over eleven servers has taken at least five minutes before. Is this normal?

    The time taken will depend upon your duration and the frequency at which the data is being harvested by WLDF. I have a feeling that the number of data points in each graph is large. What frequency have you configured your harvester with?

    The WLDF console extension (http://e-docs.bea.com/wls/docs92/wldf_console_ext/index.html) provides a nice mix of MBean and WLDF monitoring, so for your real-time monitoring needs it should be able to get data directly from MBeans.

    > Finally, is the logging of the data asynchronous? We would like to poll our production servers more frequently than five minutes but we're afraid it'll have ill effects on our system. Right now we're polling every five minutes on eight points of data over eleven servers.

    The WLDF harvester captures data in a very efficient manner and runs in a different thread. Thus polling by WlNav (using the accessor) and logging of data are independent.

    --Vishal

    > Thanks,
    > Ben
    >
    > Ben Ames
    > Operations Engineer
    > Benefitfocus.com, Inc.
    > 843-849-7476 ext. 240
    > 843-849-9485 (fax)

    --
    Satya Ghattu
    978-851-3098

    By default, WLDF stores harvester and instrumentation data in the file store. WLDF can be configured to use a data source to store this data in a database. This is transparent to clients such as wlnav, which access the data through the accessor interface.

  • NNTP Input stream / File output stream help

    Hi, please excuse the lengthy post; I am hoping to say and include everything I need to the first time round.
    I'm trying to read data (files) from a NNTP stream. There are two important things that I had to consider:
    1) The NNTP RFC indicates a character encoding of "ASCII" (or in Java "US-ASCII")
    2) Whenever a new line starts with a double dot (".."), it needs to discard one dot
    The files I'm downloading generally have a line length of 128 (except for the header and footer information).
    My original battle was that a BufferedReader and FileWriter caused the downloaded file to fail a crc check. After investigation (comparison to a correctly downloaded file) I found that wherever the correct version had a (int) value of 65533 my downloaded file had a (int) value of 63. I came to the conclusion this has something to do with the default Charset being "windows-1252" on a Windows XP machine.
    I managed to fix this problem by reading to and writing from a byte array directly (as is demonstrated in the code below). The situation I am in now, however, is that a double dot on a new line needs to be replaced with a single dot. (This situation was easy to deal with using a BufferedReader as I could simply read the input in lines and use the String.startsWith() method)
    The first choice I'm not sure of consists of 2 options:
    1) I fix the "double-dot" issue during the read from the network stream, somehow scanning for the four successive bytes being (char) "\r\n.." and replace with 3 bytes (char) "\r\n." as I write it to file.
    2) I read the entire file completely, reopen it and scan for the double dot on a new line (which seems to me somewhat redundant, so i'm leaning towards option 1).
    So if I decide to fix this "double-dot" issue during the download, how do I go about detecting a new-line and replacing two dots with a single dot in an efficient manner (keeping in consideration the code below)?
    Another option may be to revert back to a BufferedReader, reading lines at a time and making sure that the (int) value of 65533 is not read as 63. This would involve playing around with the Charsets, so I'm not sure what to do from there either.
    Any input would be appreciated, many thanks.
    public static void save(Socket connection, String name) throws IOException {
         FileOutputStream output;
         BufferedInputStream input;
         try {
              output = new FileOutputStream(name + ".txt");
              input = new BufferedInputStream(connection.getInputStream());
         } catch (FileNotFoundException e) {
              System.err.println("Error creating file.");
              return;
         } catch (SecurityException e) {
              System.err.println("You do not have write access to this file.");
              return;
         }
         byte[] buffer = new byte[128];
         int bytes;
         try {
              // Copy the raw network stream to the file byte for byte.
              while ((bytes = input.read(buffer)) != -1)
                   output.write(buffer, 0, bytes);
         } finally {
              input.close();
              output.close();
         }
    }

    Well, I didn't write my own CRC checker. The type of articles I'm downloading from NNTP is yEnc, and I used an official yEnc decoder. Basically, when I do a byte-for-byte comparison between a (known) non-faulty download and an equivalent download done by my Java program, the data is exactly the same, except a single (byte) -3 in the correct version is written by my program as (byte) 63.
    Of course I could simply replace every -3 byte with 63, but that seems like an ad hoc solution and doesn't really solve the root of the problem.
    I have also tried passing a different Charset as a parameter to InputStreamReader, i.e.
    Input = new BufferedReader( new InputStreamReader(connection.getInputStream(), Charset.forName("US-ASCII")) );
    but this method results in a lot more (different types of) bytes being different from the correct version.

  • Would you help me regarding archiving, please?

    Dear All,
    We're in the middle of an archiving project.
    We have a scenario to reload/restore archived data from the BW Production server to the BW Development server.
    I did try it: I moved the archived file to the known archive storage location in BW Development, but the system didn't recognize it.
    My questions:
    1. Is it possible to get this scenario done? How can I make it work?
    2. I ran transaction AOBJ and found that there is a reload program (e.g. SBOOKL). What is it for? Could this program solve my case above?
    I really need your guidance.
    Best regards,
    Niel.

    Data Archiving
    Data Archiving, a service provided by SAP, removes from the database mass data that the system no longer needs online but which must still be accessible at a later date if required.
    Data Archiving removes from the database application data from closed business transactions that are no longer relevant for the operational business. The archived data is stored in archive files that can be accessed by the system in read-only mode.
    Reasons for Archiving
    There are both technical and legal reasons for archiving application data. Data Archiving:
    Resolves memory space and performance problems caused by large volumes of transaction data
    Ensures that data growth remains moderate so that the database remains manageable in the long term
    Ensures that companies can meet the legal requirements for data storage in a cost-efficient manner
    Ensures that data can be reused at a later date, for example, in new product development
    Data Archiving Requirements
    Data archiving is intended to do more than simply save the contents of database tables. Data archiving must also take the following requirements into consideration:
    Hardware independence
    Release dependence
    Data Dependencies
    Enterprise and business structure
    Optical Archiving
    The term "optical archiving" generally describes the electronic storage and management of documents in storage systems outside of the SAP Business environment. Examples of documents that can be stored in this way include:
    Scanned-in original documents, such as incoming invoices
    Outgoing documents, such as invoices created in mySAP Financials that are created electronically, then sent in printed form
    Print lists created in mySAP Business Suite
    Residence Time and Retention Periods
    The residence time is the minimum length of time that data must spend in the database before it meets the archivability criteria. Residence times can be set in application-specific Customizing.
    The retention period is the entire time that data spends in the database before it is archived. The retention period cannot be set.
    Ex: If the residence time is a month, data that has been in the system for two months will be archived. Data that is only three weeks old remains in the database.
    Backup & Restore
    Backup is a copy of the database contents that can be used in the case of a system breakdown. The aim is that as much of the database as possible can be restored to its state before the system breakdown. Backups are usually made at regular intervals according to a standard procedure (complete or incremental backup).
    Reloading the saved data into the file system is called restoring the data.
    Archiving Features
    Data Security
    Data archiving is carried out in two steps (a third step, the storage of archive files, is optional): In the first step, the data for archiving is copied to archive files. In the second step, the data is deleted from the database. This two-step process guarantees data security if problems occur during the archiving process.
    For example, the procedure identifies network data transfer errors between the database and the archive file. If an error occurs, you can restart the archiving process at any time because the data is still either in the database or in an archive file. This means that you can usually archive parallel to the online application, that is, during normal system operation, without having to back up the database first.
    You can further increase data security if you store the archive files in an external storage system before you delete the data from the database. This guarantees that the data from the database will only be deleted after it has been securely stored in an external storage system.
    Data Compression
    During archiving, data is automatically compressed by up to a factor of 5. However, if the data to be archived is stored in cluster tables, no additional compression takes place.
    Storage Space Gained
    Increased storage space in the database and the resulting performance gains in the application programs are the most important benefits of data archiving. Therefore it is useful to know how much space the data to be archived takes up in the database. It may also help to know in advance how much space the archive files that you create will need.
    Note: - Data is compressed before it is written to the archive file. The extent of the compression depends on how much text (character fields) the object contains.
    Archiving without Backup
    With SAP Data Archiving, data can be archived independently from general backup operations on the database. However, SAP recommends that you backup archive files before storing them.
    Accessing Archived Data
    Because archived data has only been removed from the database and not from the application component itself, the data is always available. Archive management allows three types of access:
    1.     (Read) access to a single data object, such as an accounting document
    2.     Analysis of an archive file (sequential read)
    3.     Reload into the database (not possible for all archiving objects)
    Converting Old Archive Files
    When archived data is read, the system automatically makes the conversions required by hardware and software changes.
    When old archive files are accessed, the Archive Development Kit (ADK) can make allowances for changes to database structures (field types, field lengths, new fields, and deleted fields) after the data was archived and for changes to hardware-dependent storage formats. This is only done on a temporary basis during read access. The data in the archive file is not changed. The following items are changed (if necessary) during automatic conversion:
    Database table schema (new and deleted columns)
    Data type of a column
    Column length
    Code page (ASCII, EBCDIC)
    Number format (such as the use of the integer format on various hardware platforms)
    If database structures in an application have undergone more changes than the ADK can handle (for example, if fields have been moved from one table to another or if one table has been divided into several separate tables), then a program is usually provided by the relevant mySAP Business Suite solution for the permanent conversion of existing archive files.
    Link to External Storage System
    Archive files created by Data Archiving can be stored on tertiary storage media, such as WORMs, magnetic-optical disks (MO), and tapes using the SAP Content Management Infrastructure (which also contains the ArchiveLink/CMS interface). This can be done manually or automatically.
    You can also store archive files in the file system of an HSM system. The HSM system manages the archive files automatically. For storage, the HSM system can also use tertiary storage media, such as MO-disks.
    CMI/R - Content Management Infrastructure / Repository
    HSM - Hierarchical Storage Management Systems
    Archiving Procedure
    The basic Archiving procedure is carried out in three steps, 
    Creating the Archive Files
    Storing Archive Files
    Executing the Delete Programs 
    Security Vs Performance
    Optionally, you can store archive files after the delete phase. To do this, you must mark Delete Phase Before Storage in archiving object-specific Customizing.
    If security is your main concern, then you should not schedule the delete phase until after the archive files have been stored. In this way you know that the data will only be deleted from the database after the archive files have successfully been moved to the external storage system. In addition, you can set the system to read the data from the storage system and not from the file system.
    However, if your main concern is the performance of the archiving programs, then you should schedule the delete program first and then store the files.
    Creating Archive Files (WRITE)
    In step one, the write program creates an archive file. The data to be archived is then read from the database and is written to the archive file in the background. This process continues until one of following three events occurs:
    All the data is written to an archive file
    Archiving is not complete, but the archive file reaches the maximum size specified in archiving object-specific Customizing
    The archiving is not yet finished, but the archive file contains the maximum number of data objects specified in Customizing.
    If in cases 2 and 3 there is still data to be archived, the system will create another archiving file.
    Storing Archive Files (STORE)
    Once the write program has finished creating archive files, these can be stored. There are several ways of storing archive files:
    Storage Systems:
    If a storage system is connected to mySAP Business Suite: At the end of a successful write job, a request is sent to this system to store the new archive files (provided the appropriate settings were made in archiving object-specific Customizing). You can also store archive files manually at a later point if you do not want them to be stored automatically. Storage is carried out by the SAP Content Management Infrastructure (which contains the ArchiveLink/CMS interface).
    HSM Systems:
    If you use an HSM system, it is sufficient to maintain the file name in Customizing (Transaction FILE). You do not then need to communicate with the storage system using the SAP Content Management Infrastructure, because the HSM system stores the files on suitable storage media according to access frequency and storage space.
    Existing Storage Media:
    Once the delete program has processed the relevant archive file, you can manually copy archive files to tape.
    Running Delete Programs
    After closing the first archive file, the archive management system creates a new archive file and continues with the archiving process. While this happens, another program reads the archived data from the completed archive file and deletes it from the database. This procedure guarantees that only data that has been correctly saved in the archive file is deleted from the database.
    If you do not carry out deletion until after the data has been stored, you can make a setting in archiving object-specific Customizing so that the system will read the archive files from the storage system during deletion. In this way, you can detect in good time any errors which might arise when transferring or saving the archive files in the storage system.
    When the last archive file is closed, a delete program starts to run for this file; several delete programs can run simultaneously for previously created archive files. Because, unlike the delete program, the write program does not generally carry out any transactions that change data in the database, the write program creates new archive files faster than the delete programs can process them. This decreases the total archiving runtime because the database is used more efficiently.
    Note:-
    Scheduling the Archive jobs outside SARA
    WRITE:-
    Using an external job scheduler (SM36, SM62)
    WRITE Run followed by EVENT - SAP_ARCHIVING_WRITE_FINISHED,
    Parameter is Session Number
    To analyze the archiving information of a particular session, use FM
         ARCHIVE_GET_FILES_OF_SESSION
         Input is Session Number
    DELETE:-
    Using an external job scheduler (SM36, SM62)
    Using program RSARCHD; input - Obj Name, Max. no. of files, Max. no. of sessions, Max. no. of jobs, Background User
    DELETE run followed by EVENT - SAP_ARCHIVING_DELETE_FINISHED
         Parameter is Session Number
    To analyze the archiving information of a particular session, use FM
         ARCHIVE_GET_FILES_OF_SESSION
         Input is Session Number
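    As an illustration, a rough sketch of reading a session's file list in a follow-up job (the function module name comes from the note above, but the parameter names and types below are assumptions - verify the interface in SE37 before using it):
         * Hedged sketch - parameter names and types are assumed, check SE37.
         data: lv_session type admi_run-document,   " session number, e.g. from the event parameter
               lt_files   type standard table of admi_files.
         call function 'ARCHIVE_GET_FILES_OF_SESSION'
           exporting
             archive_document = lv_session    " assumed importing parameter
           tables
             archive_files    = lt_files.     " assumed table parameter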
    Archiving Object
    The archiving object is a central component of SAP Data Archiving. The archiving object specifies precisely which data is archived and how. It describes which database objects must be handled together as a single business object and interprets the data irrespective of the technical specifications at the time of archiving (such as release and hardware).
    Note:-
         An archiving object has a name of up to ten characters in length.
         Transaction code to maintain the Archiving Object is AOBJ.
    The following programs must (or can) be assigned to an archiving object. The SAP System contains programs (some of which are optional) for the following actions:
    Preprocessing (Optional)
    Some archiving objects require a preprocessing program that prepares the data for archiving. This preprocessing program marks data to be archived, but it does not delete any data from the database. Preprocessing programs must always be scheduled manually and are run from Archive Administration.
    Write
    This program creates archive files and writes data to them. At this point, however, no data is being deleted from the database.
    You can specify in archiving object-specific Customizing whether the next phase (delete) is to take place automatically after the archive files have been created. Delete jobs can also be event-triggered. To do this, you set up the trigger event in archiving object-specific Customizing.
    Delete
    This function can entail several activities. The activities are always dependent on the existing archive files. Normally, the data is deleted from the database. However, in some cases, the archived data in the database may only receive a delete indicator.
    In archiving object-specific Customizing, you can specify that archive files, after successful processing, are to be transferred to an external storage system using the SAP Content Management Infrastructure (which contains the ArchiveLink/CMS interface).
    Postprocessing (Optional)
    This function is usually carried out after deletion has taken place. It is not available for all archiving objects. If the data has not yet been deleted from the database by the delete program, it is deleted by the postprocessing program.
    Reload Archive (Optional)
    You can reload archived data from the archive files into the database using this function. It is not available for all archiving objects. To access this function, choose Goto → Reload.
    Index (Optional)
    This function builds (or deletes) an index that allows individual access. It is not included in every archiving object.
    Data Object
    A data object is the application-specific instance of an archiving object, that is, an archiving object filled with concrete application data. The Archive Development Kit (ADK) ensures that data objects are written sequentially to an archive file. All data objects in an archive file have the same structure, which is described in the archiving object.
    Archive Administration (SARA)
    All interaction relating to data archiving takes place in the Archive Administration (transaction SARA). Features of Archive Administration:
    Preprocessing
         Write
         Delete
         Postprocessing
         Read - Enables you to schedule and run a program that reads and analyzes archived data.
         Index
         Storage System - Enables archive files to be transferred to a connected storage system and enables stored archive files to be retrieved from a storage system.
         Management - Offers an overview of archiving sessions for one archiving object.
    Depending on the action you have selected, you can use Goto on the menu to access the following menu options:
         Network Graphic
         Reload
         Customizing
         Job Overview
         Management
         Stored Files
         Database Tables
         Infosystems
         Statistics
         Interrupting and Continuing
    Archive Development Kit
    The Archive Development Kit (ADK) is a tool for developing archiving solutions. It also prepares the runtime environment for archiving. From a technical viewpoint, it is an intermediate layer between the application program and the archive that provides all the functions required for archiving data.
    The ADK functions are required for archiving and for subsequent access to archived data. The ADK automatically performs the hardware-dependent adjustments (such as code page and number format) and structural changes that are required when archive files are created. When the archive files are accessed later, the ADK temporarily converts data that was archived using earlier SAP releases.
    Note:-
    S_ARCHIVE is the SAP delivered user authorization check object over archiving objects. The Archive Development Kit (ADK) performs the check when an archive file is opened for one of the following actions:
    Write
    Delete
    Read
    Reload
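    As an illustration, a minimal sketch of the same check in a custom program (the S_ARCHIVE fields APPLIC, ARCH_OBJ and ACTVT are the standard ones as far as I recall, but verify them in SU21; the archiving object FI_DOCUMNT and the activity value are only examples):
         * Hedged sketch - verify the S_ARCHIVE fields and activity codes in SU21.
         authority-check object 'S_ARCHIVE'
           id 'APPLIC'   field 'FI'             " application area (example)
           id 'ARCH_OBJ' field 'FI_DOCUMNT'     " archiving object (example)
           id 'ACTVT'    field '01'.            " assumed activity code for write
         if sy-subrc <> 0.
           message 'No archiving authorization' type 'E'.
         endif.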
    Database Tables in Archive Administration (DB15)
    This enables you to display all of the tables for a specific archiving object, as well as the list of archiving objects that use a particular database table. It also displays storage and space statistics, and provides further information such as the time and number of the last archiving session and various details on the client used.
    Network Graphic
    You can use the network graphic to show any existing dependencies between archiving objects. It shows business process flows and contexts that can influence the archiving sequence. In particular, at the start of an archiving object, you can use the network graphic to obtain a good overview of related documents.
    In an archiving session, you must take into account any dependencies between archiving objects that require a specific archiving sequence. In general, you cannot archive data for an archiving object that has preceding objects until these preceding objects have been archived.
    You can use the network graphic to determine whether the archiving object that you want to use has preceding objects. If so, the preceding objects should be implemented before the current archiving object. The nodes in the network graphic represent the archiving objects. A node displays the following information:
    Archive Object Name
    Application Component Name
    Short Description
    Date of last archiving
    Status of the session
         If the status is 'Green':
         archiving and deletion were successful.
         If the status is 'Yellow':
         successfully archived but not yet deleted, or archiving still running, or delete in progress, or delete cancelled.
         If the status is 'Red':
         not yet archived, or archiving cancelled.
    Standard Log (Spool List)
    During archiving, a log is usually generated. This can be done during the write, delete, read, or reload phases. This is usually in the form of a standard log. In some cases, an application-specific log may be generated.  Depending on the archiving action that was carried out, the standard log contains statistical information per archiving session or archive file according to the following categories:
    Archiving session number
    Number of data objects for processing
    Archive session size in megabytes
    Total header data in %
    Table space in MB occupied for:
              Tables
              Indexes
    Number of table entries processed
    You can call the standard log from the screen Archive Administration: Overview of Archiving Sessions. Choose Spool List.
    Accessing Archived Data
    Data that was archived using SAP Data Archiving has been relocated from the database but not placed beyond the application. Data is still available for read access and analysis. In some cases, archived data can even be reloaded into the database.
    Note:-
    A prerequisite of read access and reload access is that the file can be found in the file system.
    Three types of access are possible:
    (Read) access to a single data object, such as an accounting document
    Direct access or single document access requires an index, which can be built either during archiving or at a later point. A complex search of the documents stored in the archive files (for example, finding all orders of an article in a particular batch for a product recall action) is not possible.
    The Archive Information System (AS) supports direct access using archive information structures that can be generated automatically either when the archive files are being written, or at a later point.
    Analysis of an archive file (sequential read)
    It is possible to run an analysis for one or several archiving sessions. The results of the analyzed data objects are displayed in a list. Furthermore, some archiving objects offer the option of a combined analysis. With this option, you can link current data in the database and archived data. (A rough sketch of such a read follows this list.)
    Reloading into the database
    Archived data does not usually need to be reloaded because it remains accessible by the applications. There is also a lot of data that cannot be reloaded or for which reloading is problematic. For this reason, reload programs do not exist for all archiving objects.
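    A rough sketch of a sequential read using the ADK function modules (the module names are the standard ADK API; the exact parameter names and the handle type should be verified in SE37, and FI_DOCUMNT/BKPF are only examples):
         * Hedged sketch of an ADK sequential read - verify the interfaces in SE37.
         data: lv_handle type i,                        " ADK archive handle (type assumed)
               lt_bkpf   type standard table of bkpf.
         call function 'ARCHIVE_OPEN_FOR_READ'
           exporting
             object         = 'FI_DOCUMNT'              " example archiving object
           importing
             archive_handle = lv_handle.
         do.
           call function 'ARCHIVE_GET_NEXT_OBJECT'
             exporting
               archive_handle = lv_handle
             exceptions
               end_of_file    = 1.
           if sy-subrc <> 0.
             exit.                                      " no more data objects
           endif.
           call function 'ARCHIVE_GET_TABLE'
             exporting
               archive_handle        = lv_handle
               record_structure      = 'BKPF'           " table to extract from the data object
               all_records_of_object = 'X'
             tables
               table                 = lt_bkpf.
           " ... analyze lt_bkpf here ...
         enddo.
         call function 'ARCHIVE_CLOSE_FILE'
           exporting
             archive_handle = lv_handle.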
    Archiving Session Overview
    On this screen, you can display and edit management information on archiving sessions. One archiving session corresponds to one set of write and delete jobs.  Within a status area, archiving sessions are, by default, organized in groups of 20. The sessions are ordered according to their status.
    Interrupting and Continuing
    In order that Data Archiving can be seamlessly integrated into the production system, you can interrupt an archiving session during the write phase and continue it at a later time. This enables you to react, during archiving, to specific time constraints or hard-disk space shortages. You can continue and complete interrupted archiving sessions when you have more time or more storage space.
    To interrupt an archiving session:
    The archiving object must be registered in transaction AOBJ as interruptible, otherwise the Archive Development Kit (ADK) is unable to inform the write program of the interruption request.
    The write program must be able to process the interruption request.
    The archiving session must be run in production mode (not test mode) and be in process.
    The delete phase must be able to start before the write phase has finished (setting in transaction AOBJ).
    To continue an archiving session:
    The session must have been interrupted within the context of the interruption concept. Archiving sessions that were interrupted for other reasons or that were terminated by archive management cannot be continued.
    The delete phase must have completed for the data that was archived up to the point of interruption, that is, the archiving session must have the status completed.
    Database Action Before and After Archiving
    Archiving uses application software that depends on and affects the organization of the database data. You should therefore organize the database before and after archiving.
    Before Archiving
Archiving application data helps to prevent storage and performance bottlenecks. However, since relocating data can itself impair performance in some circumstances (namely, whenever you need to access the archived data), you need to consider carefully which data to archive. To determine whether or not you should archive data, consider the following questions:
    If there are memory problems, can more memory be assigned to the table?
    How likely is it that you will need to access the archived data again? How often?
    Is the data accessed using an optimal index?
    Does the application perform a full table scan on the tables that contain the data to be archived?
    After Archiving
    Reorganize index: If data has been archived or simply deleted and the associated tables were accessed via an index, the index should be reorganized. Deleting table entries leaves holes in the table which are still indexed. Reorganization can shorten the access paths, reducing response times.
    Update the database statistics: If your database uses a cost-based optimizer, you must choose Update Statistics to recalculate the access paths.
    Reorganize tablespace or database space: Whether you should reorganize the tablespace depends on the reason for archiving.
    Do you expect a lot of new data for the archived tables?
    Do you want to make space for other tables?
    Note:-
Reorganization takes a long time and may need to be repeated after archiving. Throughput during a reorganization:
With export/import: approximately 60-100 MByte/hour
With unload/load: approximately 250-300 MByte/hour
    Perform an SQL Trace after reorganization.
    Statistics
When writing, deleting, reading, or reloading, statistical data on each archiving run is automatically generated and stored persistently in the database. Data archiving administrators can analyze these figures to better plan future archiving projects and request the necessary resources. The statistics also provide pertinent information on the role of data archiving in reducing the data volume in the database.
You can call this screen directly from Archive Administration (SARA) or using transaction SAR_DA_STAT_ANALYSIS. It displays the following information:
         Archiving Session Number
         Archiving Object Name
         Client ID on which the archiving session was carried out
         Date on which the archiving session was carried out
         Status of the session number
     Portion of the header data in the archiving session
     DB Space (Write): virtual storage space in MB occupied by an incomplete archiving session in the database
     DB Storage Space (Delete): virtual storage space in MB occupied by an incomplete archiving session in the database
     DB Space (Reload): virtual storage space in MB
     Written data objects in an incomplete archiving session
     Deleted data objects for an incomplete archiving session in the database
     Reloaded data objects
         Number of delete jobs
         Write job duration
         Delete job duration
         Reload job duration
    Logical Path and File
    Archive files are stored in the file system under a physical path and file name that is derived from a user-definable logical path or file name. The definition can be divided into the following steps:
    Definition of the logical path name
    Definition of the logical file name
Assignment of the logical file name to the archiving object
The system uses the logical file name ARCHIVE_DATA_FILE and the logical path name ARCHIVE_GLOBAL_PATH by default. Consequently, the names need to be changed only if special requirements must be met.
    Data Archiving Monitor
Use this indicator to activate or deactivate the data archiving monitor (transaction SAR_SHOW_MONITOR). If you set this checkbox before data archiving, archiving-relevant information on the write and delete jobs is updated. This information can be analyzed using the data archiving monitor. If there are errors, alerts are issued.
    The data archiving monitor offers the following information:
    Overview of all the archiving objects that have been run
    Detailed information on the individual archiving sessions
    Processing status display
    Help on analyzing open alerts

  • Installing PostgreSQL in Solaris 10

I have downloaded the PostgreSQL package from
www.postgresql.org/download/bittorent
I have unzipped the files, but I don't know how to continue with the installation.

Here is some documentation to get you started... it's available online.
Author: Chris Drawater
Date: May 2005
Version: 1.2
    PostgreSQL 8.0.02 for J2EE applications on Solaris 10
    Abstract
    Advance planning enables PostgreSQL 8 and its associated JDBC driver to be quickly deployed in a
    basic but resilient and IO efficient manner.
    Minimal change is required to switch JDBC applications from Oracle to PostgreSQL.
    Document Status
This document is Copyright © 2005 by Chris Drawater.
    This document is freely distributable under the license terms of the GNU Free Documentation License
    (http://www.gnu.org/copyleft/fdl.html). It is provided for educational purposes only and is NOT
    supported.
    Introduction
    This paper documents how to deploy PostgreSQL 8 and its associated JDBC driver in a basic but both
    resilient and IO efficient manner. Guidance for switching from Oracle to PostgreSQL is also provided.
    It is based upon experience with the following configurations =>
    PostgreSQL 8.0.2 on Solaris 10
    PostgreSQL JDBC driver on Windows 2000
    using the PostgreSQL distributions =>
    postgresql-base-8.0.2.tar.gz
    postgresql-8.0-311.jdbc3.jar
    Background for Oracle DBAs
    For DBAs coming from an Oracle background, PostgreSQL has a number of familiar concepts including
    Checkpoints
    Tablespaces
    MVCC concurrency model
    Write ahead log (WAL)+ PITR
    Background DB writer
    Statistics based optimizer
    Recovery = Backup + archived WALs + current WALs
However, whereas one Oracle instance (set of processes) services one physical database, PostgreSQL differs in that:
1 PostgreSQL 'cluster' services n * physical DBs
1 cluster has tablespaces (accessible to all DBs)
1 cluster = 1 PostgreSQL instance = set of server processes etc. (for all DBs) + 1 tuning config + 1 WAL
User accts are cluster-wide by default
There is no undo or before-image (BI) file, so to support MVCC the 'consistent read' data is held in the tables themselves and, once obsolete, needs to be cleaned out using the 'vacuum' utility.
    The basic PostgreSQL deployment guidelines for Oracle aware DBAs are to =>
    Create only 1 DB per cluster
    Have 1 superuser per cluster
    Let only the superuser create the database
    Have one user to create/own the DB objects + n* endusers with appropriate read/write access
    Use only ANSI SQL datatypes and DDL.
    Wherever possible avoid DB specific SQL extensions to ensure cross-database portability
    IO distribution & disc layouts
It is far better to start out with good disc layouts than to retro-fix a production database.
As with any DBMS, for resilience, the recovery components (e.g. backups, WAL, archived WAL logs) should be kept on devices separate from the actual data.
    So the basic rules for resilience are as follows.
    For non disc array or JBOD systems =>
    keep recovery components separate from data on dedicated discs etc
    keep WAL and data on separate disc controllers
mirror WAL across discs (preferably across controllers) for protection against WAL spindle loss
    For SAN based disc arrays (eg HP XP12000) =>
    keep recovery components separate from data on dedicated LUNs etc
use Host Adapter Multipathing drivers (such as mpxio) with 2 or more HBAs for access to the SAN.
Deploy application data on mirrored/striped (i.e. RAID 1+0) or write-cache fronted RAID 5 storage.
The WAL log IO should be configured to be osync for resilience (see basic tuning in a later section).
Ensure that every PostgreSQL component on disc is resilient (duplexed)!
Recovery can be very stressful...
Moving on to IO performance, it is worth noting that WAL IO and general data IO access have different IO characteristics:
WAL: sequential access (write mostly)
Data: sequential scan, random access write/read
The basic rules for good IO performance are:
    use tablespaces to distribute data and thus IO across spindles or disc array LUNs
    keep WAL on dedicated spindles/LUNs (mirror/stripe in preference to RAID 5)
    keep WAL and arch WAL on separate spindles to reduce IO on WAL spindles.
RAID or stripe data across discs/LUNs in 1 MB chunks/units if unsure what chunk size to use.
    For manageability, keep the software distr and binaries separate from the database objects.
    Likewise, keep the system catalogs and non-application data separate from the application specific data.
    5 distinct storage requirements can be identified =>
    Software tree (Binaries, Source, distr)
    Shared PG sys data
    WAL logs
    Arch WAL logs
    Application data
For the purposes of this document, the following minimal set of FS are suggested =>
/opt/postgresql/8.0.2                  # default 4 Gb for software tree
/var/opt/postgresql                    # default 100 Mb
/var/opt/postgresql/CLUST/sys          # default size 1Gb for shared sys data
/var/opt/postgresql/CLUST/wal          # WAL location # mirrored/striped
/var/opt/postgresql/CLUST/archwal      # archived WALs
/var/opt/postgresql/CLUST/data         # application data + DB sys catalogs # RAID 5
    where CLUST is your chosen name for the Postgres DB cluster
For enhanced IO distribution, a number of /data FS (e.g. data01, data02, etc.) could be deployed.
    Pre-requisites !
The GNU compiler and make software utilities (available on the Solaris 10 installation CDs) =>
gcc (compiler)   ($ gcc --version => 3.4.3)
gmake (GNU make)
are required and should be found in /usr/sfw/bin.
Create the Unix acct postgres in group dba, with a home directory of say /export/home/postgresql, using the $ useradd utility, or hack /etc/group then /etc/passwd, then run pwconv and then passwd postgres.
    Assuming the following FS have been created =>
/opt/postgresql/8.0.2          # default 4 Gb for the PostgreSQL software tree
/var/opt/postgresql            # default 100 Mb
create directories
/opt/postgresql/8.0.2/source   # source code
/opt/postgresql/8.0.2/distr    # downloaded distribution
    all owned by user postgres:dba with 700 permissions
    To ensure, there are enough IPC resources to use PostgreSQL, edit /etc/system and add the following lines
    =>
    set shmsys:shminfo_shmmax=1300000000
    set shmsys:shminfo_shmmin=1
    set shmsys:shminfo_shmmni=200
    set shmsys:shminfo_shmseg=20
    set semsys:seminfo_semmns=800
    set semsys:seminfo_semmni=70
    set semsys:seminfo_semmsl=270 # defaults to 25
set rlim_fd_cur=1024           # per process file descriptor soft limit
set rlim_fd_max=4096           # per process file descriptor hard limit
Then on the console (log in as root) =>
    $ init 0
    {a} ok boot -r
    Download Source
    Download the source codes from http://www.postgresql.org (and if downloaded via Windows, remember
    to ftp in binary mode) =>
    Distributions often available include =>
    postgresql-XXX.tar.gz => full source distribution.
    postgresql-base-XXX.tar.gz => Server and the essential client interfaces
    postgresql-opt-XXX.tar.gz => C++, JDBC, ODBC, Perl, Python, and Tcl interfaces, as well as multibyte
    support
    postgresql-docs-XXX.tar.gz => html docs
    postgresql-test-XXX.tar.gz => regression test
For a working, basic PostgreSQL installation supporting JDBC applications, simply use the 'base' distribution.
    Create Binaries
    Unpack Source =>
    $ cd /opt/postgresql/8.0.2/distr
    $ gunzip postgresql-base-8.0.2.tar.gz
    $ cd /opt/postgresql/8.0.2/source
    $ tar -xvof /opt/postgresql/8.0.2/distr/postgresql-base-8.0.2.tar
    Set Unix environment =>
    TMPDIR=/tmp
PATH=/usr/bin:/usr/ucb:/etc:.:/usr/sfw/bin:/usr/local/bin:/usr/ccs/bin:$PATH
    export PATH TMPDIR
    Configure the build options =>
    $ cd /opt/postgresql/8.0.2/source/postgresql-8.0.2
$ ./configure --prefix=/opt/postgresql/8.0.2 --with-pgport=5432 --without-readline CC=/usr/sfw/bin/gcc
    Note => --enable-thread-safety option failed
    And build =>
    $ gmake
    $ gmake install
    On an Ultra 5 workstation, this gives 32 bit executables
    Setup Unix environment
    Add to environment =>
    LD_LIBRARY_PATH=/opt/postgresql/8.0.2/lib
    PATH=/opt/postgresql/8.0.2/bin:$PATH
    export PATH LD_LIBRARY_PATH
    Create Database(Catalog) Cluster
    Add to Unix environment =>
PGDATA=/var/opt/postgresql/CLUST/sys   # PG sys data, used by all DBs
    export PGDATA
    Assuming the following FS has been created =>
/var/opt/postgresql/CLUST/sys          # default size 1Gb
    where CLUST is your chosen name for the Postgres DB cluster,
    initialize database storage area, create shared catalogs and template database template1 =>
$ initdb -E UNICODE -A password -W
# DBs default to the Unicode char set, use basic password auth, prompt for the superuser password
    Startup, Shutdown and basic tuning of servers
    Check servers start/shutdown =>
    $ pg_ctl start -l /tmp/logfile
    $ pg_ctl stop
    Next, tune the PostgreSQL instance by editing the configuration file $PGDATA/postgresql.conf .
    � Chris Drawater, 2005
    PostgreSQL 8.0.2 on Solaris, v1.2
    p5/10
    Page 6
    First take a safety copy =>
    $ cd $PGDATA
    $ cp postgresql.conf postgresql.conf.orig
    then make the following (or similar changes) to postgresql.conf =>
    # listener
    listen_addresses = 'localhost'
    port = 5432
    # data buffer cache
    shared_buffers = 10000
    # each 8Kb so depends upon memory available
    #checkpoints
    checkpoint_segments = 3
    # default
    checkpoint_timeout = 300
    # default
    checkpoint_warning = 30
# default - logs warning if ckpt interval < 30s
    # log related
    fsync = true
    # resilience
    wal_sync_method = open_sync
    # resilience
    commit_delay = 10
    # group commit if works
    archive_command = 'cp "%p" /var/opt/postgresql/CLUST/archwal/"%f"'
    # server error log
    log_line_prefix = '%t :'
    # timestamp
    log_min_duration_statement = 1000
    # log any SQL taking more than 1000ms
    log_min_messages = info
    #transaction/locks
    default_transaction_isolation = 'read committed'
    Restart the servers =>
    $ pg_ctl start -l /tmp/logfile
    Create the Database
    This requires the FS =>
/var/opt/postgresql/CLUST/wal          # WAL location
/var/opt/postgresql/CLUST/archwal      # archived WALs
/var/opt/postgresql/CLUST/data         # application data + DB sys catalogs
plus maybe also =>
/var/opt/postgresql/CLUST/backup       # optional, for data and config files etc. as a staging area for tape
Create the clusterwide tablespaces (in this example, a single tablespace named 'appdata') =>
    $ psql template1
    template1=# CREATE TABLESPACE appdata LOCATION '/var/opt/postgresql/CLUST/data';
    template1=# SELECT spcname FROM pg_tablespace;
    spcname
    pg_default
    pg_global
    appdata
    (3 rows)
    and add to the server config =>
    default_tablespace = 'appdata'
    Next, create the database itself (eg name = db9, unicode char set) =>
$ createdb -D appdata -E UNICODE -e db9    # appdata = default TABLESPACE
$ createlang -d db9 plpgsql                # install 'Oracle PL/SQL like' language
    WAL logs are stored in the directory pg_xlog under the data directory. Shut the server down & move the
    directory pg_xlog to /var/opt/postgresql/CLUST/wal and create a symbolic link from the original location in
    the main data directory to the new path.
    $ pg_ctl stop
    $ cd $PGDATA
    $ mv pg_xlog /var/opt/postgresql/CLUST/wal
    $ ls /var/opt/postgresql/CLUST/wal
    $ ln -s /var/opt/postgresql/CLUST/wal/pg_xlog $PGDATA/pg_xlog
    # soft link as across FS
    $ pg_ctl start -l /tmp/logfile
Assuming all is now working OK, shut down PostgreSQL and back up all the PostgreSQL-related FS above... just in case!
    User Accounts
    Create 1 * power user to create/own/control the tables (using psql) =>
$ psql template1
    create user cxd with password 'abc';
    grant create on tablespace appdata to cxd;
    Do not create any more superusers or users that can create databases!
    Now create n* enduser accts to work against the data =>
$ psql template1
    CREATE GROUP endusers;
    create user enduser1 with password 'xyz';
    ALTER GROUP endusers ADD USER enduser1;
    $ psql db9 cxd
grant select on <table> to group endusers;
    JDBC driver
    A pure Java (Type 4) JDBC driver implementation can be downloaded from
    http://jdbc.postgresql.org/
    Assuming the use of the SDK 1.4 or 1.5, download
    postgresql-8.0-311.jdbc3.jar
    and include this in your application CLASSPATH.
    (If moving JAR files between different hardware types, always ftp in BIN mode).
    Configure PostgreSQL to accept JDBC Connections
    To allow the postmaster listener to accept TCP/IP connections from client nodes running the JDBC
    applications, edit the server configuration file and change
    listen_addresses = '*'
    # * = any IP interface
    Alternatively, this parameter can specify only selected IP interfaces ( see documentation).
In addition, the client authentication file (pg_hba.conf) will need to be edited to allow access to our database server.
    First take a backup of the file =>
    $ cp pg_hba.conf pg_hba.conf.orig
Add the following line =>
host    db9    cxd    0.0.0.0/0    password
where, for this example, the database is db9, the user is cxd, and the authentication method is password.
    Switching JDBC applications from Oracle to PostgreSQL
    The URL used to connect to the PostgreSQL server should be of the form
    jdbc:postgresql://host:port/database
    If used, replace the line (used to load the JDBC driver)
    Class.forName ("oracle.jdbc.driver.OracleDriver");
    with
    Class.forName("org.postgresql.Driver");
    Remove any Oracle JDBC extensions, such as
    ((OracleConnection)con2).setDefaultRowPrefetch(50);
Instead, the row pre-fetch must be specified at an individual Statement level, e.g.
PreparedStatement pi = con1.prepareStatement("select ...");
pi.setFetchSize(50);
If not set, the default fetch size is 0.
Likewise, any non-ANSI SQL extensions will need changing.
For example, sequence numbers:
    Oracle => online_id.nextval
    should be replaced by
    PostgreSQL => nextval('online_id')
Oracle 'hints' embedded within SQL statements are ignored by PostgreSQL.
    Now test your application!
    Concluding Remarks
    At this stage, you should now have a working PostgreSQL database fronted by a JDBC based application,
and the foundations will have been laid for:
A reasonable level of resilience (recoverability)
    A good starting IO distribution
The next step is to tune the system under load... and that's another doc...
    Chris Drawater has been working with RDBMSs since 1987 and the JDBC API since late 1996, and can
    be contacted at [email protected] or [email protected] .
Appendix 1 - Example .profile
    TMPDIR=/tmp
    export TMPDIR
PATH=/usr/bin:/usr/ucb:/etc:.:/usr/sfw/bin:/usr/local/bin:/usr/ccs/bin:$PATH
    export PATH
    # PostgreSQL 802 runtime
    LD_LIBRARY_PATH=/opt/postgresql/8.0.2/lib
    PATH=/opt/postgresql/8.0.2/bin:$PATH
    export PATH LD_LIBRARY_PATH
    PGDATA=/var/opt/postgresql/CLUST/sys
    export PGDATA

  • Selection part of Logical Database

I read about logical databases somewhere on the net, but it was not very clear. Can anyone explain the selection part of a logical database?

    Logical databases are special ABAP programs that retrieve data and make it available to application programs. The most common use of logical databases is still to read data from database tables and linking them to executable ABAP programs while setting the program contents. You edit logical databases using the Logical Database Builder in the ABAP Workbench.
    However, since Release 4.5A, it has also been possible to call logical databases independently of this tool using the function module LDB_PROCESS. This allows you to call several logical databases from any ABAP program, nested in any way. It is also possible to call a logical database more than once in a program, if it has been programmed to allow this. This is particularly useful for executable programs, allowing them to use more than one logical database and process a database more than once.
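As a rough sketch of such a call (assuming the standard LDB_PROCESS interface with a callback table of type LDBCB; the flight-model logical database F1S and its node SPFLI are used purely for illustration, so verify the interface in SE37 before use):
REPORT z_ldb_process_demo.
* Sketch: calling a logical database via LDB_PROCESS.
* F1S/SPFLI are illustrative assumptions; check the interface in SE37.
DATA: gt_callback TYPE STANDARD TABLE OF ldbcb,
      gs_callback TYPE ldbcb.
* Register a callback routine for node SPFLI of LDB F1S
gs_callback-ldbnode = 'SPFLI'.
gs_callback-get     = 'X'.         " call back at the GET event
gs_callback-cb_prog = sy-repid.    " the routine lives in this program
gs_callback-cb_form = 'CB_SPFLI'.
APPEND gs_callback TO gt_callback.
CALL FUNCTION 'LDB_PROCESS'
  EXPORTING
    ldbname             = 'F1S'
  TABLES
    callback            = gt_callback
  EXCEPTIONS
    ldb_not_reentrant   = 1
    ldb_incorrect       = 2
    ldb_already_running = 3
    ldb_error           = 4
    OTHERS              = 5.
IF sy-subrc <> 0.
  WRITE: / 'LDB_PROCESS failed, subrc =', sy-subrc.
ENDIF.
* Callback routine: receives each SPFLI line read by the LDB
FORM cb_spfli USING p_name  TYPE ldbn-ldbnode
                    p_wa    TYPE spfli
                    p_evt   TYPE c
                    p_check TYPE c.
  WRITE: / p_wa-carrid, p_wa-connid, p_wa-cityfrom, p_wa-cityto.
ENDFORM.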
Logical databases contain Open SQL statements that read data from the database, so you do not need to use SQL in your own programs. The logical database reads the data, stores it in the program if necessary, and then passes it line by line to the application program or the function module LDB_PROCESS using an interface work area.
    http://help.sap.com/saphelp_nw2004s/helpdata/en/9f/db9b5e35c111d1829f0000e829fbfe/content.htm
    A logical database is a special ABAP/4 program which combines the contents of certain database tables. You can link a logical database to an ABAP/4 report program as an attribute. The logical database then supplies the report program with a set of hierarchically structured table lines which can be taken from different database tables.
    GO THROUGH LINKS -
    http://www.sap-basis-abap.com/saptab.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9bfa35c111d1829f0000e829fbfe/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9b5e35c111d1829f0000e829fbfe/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/c6/8a15381b80436ce10000009b38f8cf/frameset.htm
    /people/srivijaya.gutala/blog/2007/03/05/why-not-logical-databases
    www.sapbrain.com/FAQs/TECHNICAL/SAP_ABAP_Logical_Database_FAQ.html
    www.sap-img.com/abap/abap-interview-question.htm
    www.sap-img.com/abap/quick-note-on-design-of-secondary-database-indexes-and-logical-databases.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9b5e35c111d1829f0000e829fbfe/content.htm
    http://help.sap.com/saphelp_nw2004s/helpdata/en/9f/db9bb935c111d1829f0000e829fbfe/content.htm
Logical databases are standard SAP programs that are designed with selection screens and help you retrieve and access data from various tables in an efficient manner.
    You can get to know more about the logical databases from the SAP documentation or the following links.
Advantages of logical databases:
1) No programming is needed for data retrieval and selection.
2) An easy-to-use standard user interface, with completeness checks on user input.
Disadvantages:
1) Fast when few tables are involved, but if the required table is at the lowest level of the hierarchy, all upper-level tables must be read first, so performance is slower.
1. A logical database is in fact a program only.
2. This LDB provides two main things:
a) a predefined selection screen which handles all user inputs and validations
b) a predefined set of data based upon the user selection
3. So we don't have to worry about which tables to fetch the data from.
4. Moreover, this LDB program handles all user authorizations and is efficient in all respects.
5. The tcode is SLDB.
Good info about logical databases; you can check these links:
    http://www.geekinterview.com/question_details/1506
    http://help.sap.com/saphelp_46c/helpdata/EN/35/2cd77bd7705394e10000009b387c12/frameset.htm
    http://help.sap.com/saphelp_46c/helpdata/en/9f/db9bed35c111d1829f0000e829fbfe/frameset.htm
    Functions for displaying and changing logical databases:
    Call Transaction SE36 or
    Choose ABAP Workbench -> Development -> Programming environ. -> Logical databases
    Interaction between database program and report:
    During program processing, subroutines are performed in the database program and events are executed in the report.
To read data from database tables, we use a logical database.
    A logical database provides read-only access to a group of related tables to an ABAP/4 program.
advantages:-
The programmer need not worry about the primary key for each table, because the logical database knows how the different tables relate to each other and can issue the SELECT command with the proper WHERE clause to retrieve the data.
i) An easy-to-use standard user interface.
ii) Check functions which check that user input is complete, correct, and plausible.
iii) Meaningful data selection.
iv) Central authorization checks for database accesses.
v) Good read-access performance while retaining the hierarchical data view determined by the application logic.
disadvantages:-
i) If you do not specify a logical database in the program attributes, the GET events never occur.
ii) There is no ENDGET command, so the code block associated with an event ends with the next event statement (such as another GET or an END-OF-SELECTION); see the report sketch after the build steps below.
1. Transaction code SLDB.
2. Enter the name Z<ldb-name>.
3. Create.
4. Enter a short text.
5. Create.
6. Enter the name of the root node (here EKKO).
7. Enter a short text (F6).
8. Node type -> database table.
9. Create.
10. Change the logical DB: right-click on EKKO and insert a node; here the node name is EKPO.
11. Create.
12. Click on Selections.
13. Press 'No' on the prompt 'Should the changed structure of Z<ldb name> be saved first?'.
14. Select the tables which you want to join.
15. Transfer.
16. Now go to the coding part.
17. Save.
18. Activate.
19. Click on the source code, double-click on the first include, and activate it.
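Once the logical database is activated, a minimal report can use it. The following is a sketch only, assuming the Z<ldb-name> built above (root node EKKO, subordinate node EKPO) has been entered as the logical database in the report's attributes:
REPORT z_ldb_usage_demo.
* Sketch: report bound to the Z logical database created above.
* The LDB name goes into the report attributes, not into the code.
NODES: ekko, ekpo.   " interface work areas filled by the LDB
GET ekko.
* Fired once for each purchasing document header delivered by the LDB
  WRITE: / ekko-ebeln, ekko-bukrs, ekko-lifnr.
GET ekpo.
* Fired once per item of the current header; note there is no ENDGET,
* so this block simply ends at the next event keyword
  WRITE: / ekpo-ebelp, ekpo-matnr, ekpo-menge.
END-OF-SELECTION.
  WRITE: / 'End of LDB processing.'.
Note how each GET block ends at the next event keyword, which is exactly the missing-ENDGET behavior mentioned in the disadvantages above.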
    Regards
    Vasu
