A typical mathematical problem in programming

Hi guys, I am at the learning stage of Java but am finding it quite promising. However, I am faced with a small mathematical problem that needs to be coded.
The question is stated below for clarity:
Assume a 2D coordinate system bounded by a rectangle, i.e. the X axis spans (0, Xmax) and the Y axis spans (0, Ymax).
Assume three points on the X axis (the program allows selecting them anywhere on the X axis). Call these points A, B, C.
Let's take a 2D point P in the system, (PX1, PY1). Assume we are given the data PA - PB and PB - PC (basically the differences of the distances from P to A, B, and C).
Now I need to scan the rectangular bound to find the unique point P for which PA - PB and PB - PC match the given values.
I am trying to code this in Java. Could anyone guide me on this? This small program would lead to a very innovative solution in the field of acoustics.
Looking forward to some help.
regards
deepak

Hi, I meant an innovative solution towards acoustic detection and tracking of targets. How I intend to go about it is a little involved and may not be relevant to this forum; if it is insisted upon, I will share it. Also, PA - PB is not equal to PB - PC. I will elaborate for clarity. The positions A, B, C are three microphones which pick up sound from the rectangular area. For example, if point P is the source of a sound, then the three microphones will pick it up with corresponding phase shifts due to the differences in the path lengths covered by the acoustic wave. So, by processing the three waveforms from the mics, I would have the actual differences in length (PB - PA, PC - PB, and PC - PA). Now, if I reverse engineer the whole scenario and, for every point in the rectangular area, my program calculates PB - PA and PC - PB (PC - PA could serve as a redundancy check), there would probably be only one unique point that corresponds to the measured values.
Now, I just need a Java program to do this. As I see it, the program will run a nested loop (an i loop for the x coordinate and a j loop for the y coordinate) and, for every point, calculate the distances to A, B, and C, compute the differences, and check them against the given data. The syntax and the newness of Java make this a little difficult for me at this stage. I would request some guidance on this. Thanks, deepak
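A rough Java sketch of the brute-force scan described above, under a few stated assumptions: the microphones A, B, C sit on the X axis at positions chosen purely for illustration, the rectangle is scanned on a fixed grid, and because real measurements are noisy the program keeps the grid point whose computed PB - PA and PC - PB are closest (in a least-squares sense) to the given data instead of testing for exact equality. All numbers and names below are illustrative, not part of the original problem statement.

// Brute-force TDOA grid scan: find the point P in the rectangle whose
// distance differences (PB - PA and PC - PB) best match the measured values.
// Bounds, microphone positions, measured differences and grid step are
// illustrative assumptions.
public class TdoaGridScan {

    // distance from (x, y) to a microphone at (mx, 0) on the X axis
    static double dist(double x, double y, double mx) {
        return Math.hypot(x - mx, y);
    }

    public static void main(String[] args) {
        double xMax = 100.0, yMax = 100.0;      // rectangle bounds
        double ax = 10.0, bx = 50.0, cx = 90.0; // microphones A, B, C on the X axis
        double dBA = 4.0, dCB = -7.0;           // measured PB - PA and PC - PB
        double step = 1.0;                      // grid resolution

        double bestX = 0, bestY = 0, bestErr = Double.MAX_VALUE;
        for (double x = 0; x <= xMax; x += step) {
            for (double y = 0; y <= yMax; y += step) {
                double pa = dist(x, y, ax);
                double pb = dist(x, y, bx);
                double pc = dist(x, y, cx);
                // squared error between computed and measured differences
                double err = Math.pow((pb - pa) - dBA, 2) + Math.pow((pc - pb) - dCB, 2);
                if (err < bestErr) {
                    bestErr = err;
                    bestX = x;
                    bestY = y;
                }
            }
        }
        System.out.printf("Best match P = (%.1f, %.1f), error = %.4f%n", bestX, bestY, bestErr);
    }
}

Geometrically, each measured difference constrains P to one branch of a hyperbola with the two microphones as foci, so a finer grid (or a second, finer pass around the best cell) converges on their intersection. Note that because all three microphones lie on the X axis, a point at (x, y) and its mirror image at (x, -y) produce identical differences; restricting the scan to y between 0 and Ymax, as above, resolves that ambiguity.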

Similar Messages

  • Setting Default Indesign program with multiple versions installed Snow Leopard

    Hi,
    When two versions of InDesign are installed (CS4 and CS5), Snow Leopard uses CS5 as the default application to open INDD files.
    Typically, if you want to set a particular program as the default, you right-click on the file, go to "Get Info", then "Open With", pick the application from the pulldown menu, and then click "Change All" right below it. That program then normally becomes the default. This has worked every time except in this case. When trying to set InDesign CS4 as the default program for all INDD files (I use CS4 99% of the time and all INDD files on the system were created using CS4), it will not let me use the "Change All" button. It automatically goes back to CS5. I want InDesign CS4 to be the default program.
    Any suggestion is greatly appreciated.
    Thanks.

    Change file associations manually or have this freebie jewel do it for you. Most likely CS5 will take over again. Otherwise you can go fish on http://hints.macworld.com/ for another way.
    Rubicode - RCDefaultApp

  • When I click on my Firefox desktop shortcut, this message appears: "The application or DLL C:\Program Files\Mozilla Firefox\MOZCRT19.dll is not a valid Windows image. Please check this against your installation diskette." I cannot enter the program.

    I believe that my question gave all the details. I am wireless in my home and the other computers connected to this wireless have no problem getting into Firefox. What is MOZCRT19?

    clementem,
    This means there is a problem with the files you need to run Firefox.
    Try a clean reinstall:
    1. Download the Firefox installer from [http://mozilla.com Mozilla.com] and save it to your desktop or other location.
    2. Make sure that Firefox is closed completely
    3. Delete the Firefox installation directory (On Windows, this is typically the C:\Program Files\Mozilla Firefox folder that contains "firefox.exe" and other Firefox program files and folders).
    4. Reinstall Firefox by running the installer you downloaded previously.
    Your personal settings, including bookmarks and history, will not be lost.
    See http://kb.mozillazine.org/Standard_diagnostic_-_Firefox#Clean_reinstall

  • Problems with getting waveform using tektronix tds 6604

    hello,
    I'm using the TDS 6604 scope with the Tektronix LabVIEW driver (TKTDS6K) under LabVIEW 6.0.2.
    I want to read the displayed waveform via GPIB.
    This is not a problem; the waveform and the scales are correct. BUT if I stop the acquisition on the scope
    (Run/Stop button) and then change the time or
    amplitude scale or the position of the graph on the scope, I read the correct waveform but the scales are incorrect. I tested it for a while but I could not see a pattern. Perhaps I have to apply some mathematical operations to the array in order to recover the correct scaling and positions?
    thanks in advance, sven

    Without looking at the actual LabVIEW driver, my guess would be that the graph's scale settings come from the VI that you use to programmatically set the vertical and horizontal settings of the scope. If you manually change the scope, the capture-waveform VI has no idea what the settings are, because all it does is retrieve the waveform; it doesn't issue a query to get the updated scope settings. If mixing programmatic and manual control is something you need to do, you'll have to add code that queries the scope for its settings. Most drivers don't include these queries because typically only the program is in control.
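    As a rough illustration of that last point in code rather than LabVIEW, the Java sketch below re-queries the instrument for its current horizontal and vertical scale right after fetching the raw waveform, so manual front-panel changes are picked up. The GpibSession interface is a made-up stand-in for whatever GPIB layer you use, and the exact query strings are assumptions in the usual Tektronix SCPI style; check them against the TDS6604 programmer manual.

    // Hypothetical sketch (not the TKTDS6K driver): after fetching the raw
    // waveform, re-query the scope so manual front-panel changes are reflected.
    // GpibSession and the exact query strings are assumptions for illustration.
    public final class ScopeReadback {

        /** Minimal GPIB abstraction assumed for this sketch. */
        public interface GpibSession {
            String query(String command); // send a query, return the reply string
        }

        /** Waveform samples bundled with the scales that were current when it was read. */
        public static final class ScaledWaveform {
            public final double[] samples;
            public final double secondsPerDivision;
            public final double voltsPerDivision;

            ScaledWaveform(double[] samples, double sPerDiv, double vPerDiv) {
                this.samples = samples;
                this.secondsPerDivision = sPerDiv;
                this.voltsPerDivision = vPerDiv;
            }
        }

        /** Re-read the current scales instead of trusting values cached at setup time. */
        public static ScaledWaveform readWithCurrentScales(GpibSession scope, double[] rawSamples) {
            // Query strings are assumptions; verify against the scope's programmer manual.
            double sPerDiv = Double.parseDouble(scope.query("HORizontal:SCAle?").trim());
            double vPerDiv = Double.parseDouble(scope.query("CH1:SCAle?").trim());
            return new ScaledWaveform(rawSamples, sPerDiv, vPerDiv);
        }
    }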

  • Lync Reverse Proxy Alternatives

    When migrating from OCS 2007 to Lync 2010, we balked at Microsoft's recommendation to deploy Forefront Threat Management Gateway (or ISA) just to get the reverse proxy services.
    TMG is way too expensive and complex for such a limited, simple use case.
    I didn't find much information on what people are using as free alternatives to ISA/TMG, so I decided to post this discussion in case there are others out there who are interested.
    We decided to use Apache 2.2 on Windows Server 2008 R2. 
    Here's how we configured it:
    Read here to understand what features require a reverse proxy, and follow the steps to configure your FQDNs, Network Adapters and (maybe) obtain an SSL Certificate for the reverse proxy. 
    http://technet.microsoft.com/en-us/library/gg398069.aspx
    Download and install the latest stable release of Apache with OpenSSL on your reverse proxy server. 
    http://httpd.apache.org/download.cgi
    We're using the same certificate on the reverse proxy that we use on our front end server (it has the appropriate SANs), so we need to convert it to PEM format for use with Apache:
    Use the Certificates MMC on your front end server to export the certificate and include the private key.
    Transfer the resultant .pfx file to your reverse proxy server.
    Use OpenSSL to convert your .pfx file to PEM:
    openssl pkcs12 -in c:\pathto\yourcert.pfx -out c:\pathto\yourcert.pem -nodes
    Separate the private key from the certificate using notepad: 
    Open the new .pem file and cut the text from the beginning of the file through the end of the "-----END RSA PRIVATE KEY-----" tag.
    Save that text to a new file named yourcert.key.
    Save yourcert.pem, which should now only include the certificate.
    Copy (or move) the certificate and private key to the Apache configuration directory. We like to use C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\extra\ssl for storing the certificates.
    Edit httpd.conf (typically in C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf) to enable and configure the proxy and SSL features (see http://httpd.apache.org/docs/2.2/mod/mod_proxy.html for more information on each directive):
    Uncomment the following lines, which will enable proxy and SSL:
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    LoadModule ssl_module modules/mod_ssl.so
    Include conf/extra/httpd-ssl.conf
    Add the following lines to configure reverse proxy behavior:
    #Be a reverse proxy, not a forward proxy
    ProxyRequests Off
    #Accept requests from any client to any URL
    <Proxy *>
    Order Deny,Allow
    Allow from all
    </Proxy>
    #Set the network buffer to improve throughput
    ProxyReceiveBufferSize 4096
    #Configure the Reverse Proxy to forward all requests to your front end server on 4443
    ProxyPass / https://yourfrontend.domain.com:4443/
    ProxyPassReverse / https://yourfrontend.domain.com:4443/
    #Preserve Host Headers for Lync
    ProxyPreserveHost On
    Optionally, configure logging directives, bindings and server name.
    Save and close httpd.conf
    Edit httpd-ssl.conf (typically in conf\extra):
    Configure the session cache:
    Uncomment:
    SSLSessionCache "dbm:C:/Program Files (x86)/Apache Software Foundation/Apache2.2/logs/ssl_scache"
    Comment out:
    SSLSessionCache "shmcb:C:/Program Files (x86)/Apache Software Foundation/Apache2.2/logs/ssl_scache(512000)"
    Locate the <VirtualHost _default_:443> tag and configure the following:
    Add the following directive:
    SSLProxyEngine On
    Configure the path to your SSL Certificate saved in step 3-5 above:
    SSLCertificateFile "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\extra\ssl\yourcert.pem"
    Configure the path to your private key saved in step 3-5 above:
    SSLCertificateKeyFile "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\extra\ssl\yourcert.key"
    Optionally, configure the SSLCACertificateFile (you can download the appropriate bundle from your CA).
    Optionally, configure logging directives.
    Save and close httpd-ssl.conf
    Restart the Apache2.2 service
    Configure public DNS records and appropriate firewall rules to allow public http/https traffic to the external interface of your reverse proxy, and to allow the internal interface of
    the reverse proxy to talk to the front end Lync server on 8080 and 4443.
    From an external connection, test connectivity through the reverse proxy:
    Test https://dialin.company.com (the friendly URL for getting dial-in information, if you're using voice conferencing).
    Test the Lync Web App by setting up an online meeting and following the URL to join the meeting. 
    You can force the use of the web app by appending ?sl= to the end of the meet.company.com link. 
    See this for more information http://blogs.technet.com/b/jenstr/archive/2010/11/30/launching-lync-web-app.aspx
    Hope this information is helpful and saves some of you some money and trouble.
    Please contact me if you need further clarification or see any mistakes in my notes.
    Best regards,
    Kenneth Walden
    Enterprise Systems Supervisor
    GSD&M
    Austin, TX

    I'd like to thank you for this article.  We were setting up Apache RP for Lync .... needless to say they weren't too excited to learn this new (and highly complex with lots of specific undocumented requirements) Microsoft product.  Anyways, your
    blog saved me a LOT of headache.  I owe you big time. 
    AWESOME JOB. 
    -Greg
    *****EDIT***
    Decided to come back in here and post good information. We had issues with EXTERNAL and ANONYMOUS users being able to attend a meeting. The "DIALIN" URL was working fine but the "MEET" URL was broken. On our WFE servers we were getting the event error shown below. It turns out that our reverse proxy was not set to "ProxyPreserveHost On". Once we put that in there, ALL was good.
    Notice that the MEET portion was the only thing that was really broken. So, if you can get DIALIN to work but MEET doesn't, your RP is correctly forwarding 443 to 4443, but your RP is sending the wrong HEADER. Look for http://10.x.x.x/meet/ or something similar in the event logs.
    Log Name:      Application
    Source:        ASP.NET 2.0.50727.0
    Date:          11/16/2011 1:26:35 PM
    Event ID:      1309
    Task Category: Web Event
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      OneofMyInternalWFEservers.local
    Description:
    Event code: 3005
    Event message: An unhandled exception has occurred.
    Event time: 11/16/2011 1:26:35 PM
    Event time (UTC): 11/16/2011 6:26:35 PM
    Event ID: b2039ecd0a62482284030f62e1e639d8
    Event sequence: 129
    Event occurrence: 28
    Event detail code: 0
    Application information:
        Application domain: /LM/W3SVC/34578/ROOT/meet-1-129658725547585993
        Trust level: Full
        Application Virtual Path: /meet
        Application Path: C:\Program Files\Microsoft Lync Server 2010\Web Components\Join Launcher\Ext\
        Machine name: MYWFE.local
    Process information:
        Process ID: 14204
        Process name: w3wp.exe
        Account name: NT AUTHORITY\NETWORK SERVICE
    Exception information:
        Exception type: HttpException
        Exception message: Server cannot append header after HTTP headers have been sent. 
    Request information:
        Request URL:
    https://FQDN:4443/meet/MyName/456456
        User host address: gatewayIP
        User: 
        Is authenticated: False
        Authentication Type: 
        Thread account name: NT AUTHORITY\NETWORK SERVICE
    Thread information:
        Thread ID: 7
        Thread account name: NT AUTHORITY\NETWORK SERVICE
        Is impersonating: False
        Stack trace:    at System.Web.HttpHeaderCollection.SetHeader(String name, String value, Boolean replace)
       at Microsoft.Rtc.Internal.WebServicesAuthFramework.OCSAuthModule.EndRequest(Object source, EventArgs e)
       at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
       at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
    Custom event details:
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="ASP.NET 2.0.50727.0" />
        <EventID Qualifiers="32768">1309</EventID>
        <Level>3</Level>
        <Task>3</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2011-11-16T18:26:35.000000000Z" />
        <EventRecordID>4483</EventRecordID>
        <Channel>Application</Channel>
        <Computer>XXXXXXXXXXXXXXXXXX</Computer>
        <Security />
      </System>
      <EventData>
        <Data>3005</Data>
        <Data>An unhandled exception has occurred.</Data>
        <Data>11/16/2011 1:26:35 PM</Data>
        <Data>11/16/2011 6:26:35 PM</Data>
        <Data>b2039ecd0a62482284030f62e1e639d8</Data>
        <Data>129</Data>
        <Data>28</Data>
        <Data>0</Data>
        <Data>/LM/W3SVC/34578/ROOT/meet-1-129658725547585993</Data>
        <Data>Full</Data>
        <Data>/meet</Data>
        <Data>C:\Program Files\Microsoft Lync Server 2010\Web Components\Join Launcher\Ext\</Data>
        <Data>SNKXS300</Data>
        <Data>
        </Data>
        <Data>14204</Data>
        <Data>w3wp.exe</Data>
        <Data>NT AUTHORITY\NETWORK SERVICE</Data>
        <Data>HttpException</Data>
        <Data>Server cannot append header after HTTP headers have been sent.</Data>
        <Data>XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX</Data>
        <Data>/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX</Data>
        <Data>10.71.1.1</Data>
        <Data>
        </Data>
        <Data>False</Data>
        <Data>
        </Data>
        <Data>NT AUTHORITY\NETWORK SERVICE</Data>
        <Data>7</Data>
        <Data>NT AUTHORITY\NETWORK SERVICE</Data>
        <Data>False</Data>
        <Data>   at System.Web.HttpHeaderCollection.SetHeader(String name, String value, Boolean replace)
       at Microsoft.Rtc.Internal.WebServicesAuthFramework.OCSAuthModule.EndRequest(Object source, EventArgs e)
       at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
       at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean&amp; completedSynchronously)
    </Data>
      </EventData>
    </Event>

  • Adobe Reader 10.1.4 Lock-Ups

    Hello,
    When our campus users launch Adobe Reader 10.1.4 after logging on to a campus machine via a network account (with a re-directed app data folder) and launch a PDF from a network location by double-clicking it from Windows Explorer, Adobe Reader will launch, but then lock up and enter a "Not Responding" state after a few seconds of working with the PDF. I can only reproduce this issue with both the Enable Protected Mode at startup and the Show me messages when I launch Reader boxes checked (if either box is unchecked, the lock-up does not occur). If I wait long enough, sometimes, the "Not Responding" state will end and I will be able to view the PDF without issues. However, after a single reboot, the problem re-occurs. Our campus machines are on the 32-bit Windows 7 OS.
    I have been able to reproduce this issue in Adobe Reader 10.1.2, 10.1.3, and 10.1.4, but not in Adobe Reader 10.1.1. I am not able to reproduce this issue using a local account opening a local PDF, a local account opening a network PDF, or using a network account to open a local PDF. The issue also does not occur if the user first launches Adobe Reader, and then opens the network PDF using File->Open. In other words, the issue only occurs when a network account opens a network PDF by double-clicking on it from Windows Explorer.
    I did some research in the Adobe Reader forums, and here are the workarounds that I found and my success with them:
    1) Uncheck Enable Protected Mode at startup
    As I stated before, this does work around the problem, but also decreases security. As a troubleshooting step, I tried using the Adobe Reader documentation to white-list everything that Protected Mode was 'blocking'. The lock-ups still occurred even though the Protected Mode log was empty (after whitelisting) and did not indicate that anything was being blocked.
    2) Disable "Enable hardware rendering for legacy video cards"
    This workaround did not work for me, even after disabling that option and switching the preferred renderer to software.
    3) Uninstall and reinstall Adobe Reader
    This workaround was also ineffective.
    4) Uncheck Show me messages when I launch Reader
    As I stated before, this does work around the problem.
    But these are not fixes - just workarounds. Does anyone know what the root problem is and whether or not Adobe will be addressing this issue?
    Thank you in advance.

    Hi, I can't offer any solution, just agreement that it isn't just you; others share the experience:
    The lockup began several years ago and subsequent versions of Reader have not altered the lockup problem one bit.
    Typically the whole program loads, as does the document (it doesn't matter which PDF; any and every PDF causes the lockup). You have about 4 seconds to scroll down or take whatever action, then... the application completely freezes, states "not responding", and usually remains entirely locked for at least 1 to 2 minutes while it hogs the majority of system resources. When it releases, the program again fully responds. Closing and reopening only repeats the same bad user experience.
    Uninstalling, reinstalling, and new versions have no effect. Inevitably, the main result is to use PDFs significantly less, as it is not worth it, yet ultimately it is unavoidable for documents.
    Following your 1, 2 and 4 has reduced lockup time to around 50 seconds, but that is still not acceptable for regular use. This reduction, and the fact that earlier versions on the same system platform worked 100% fine, suggest to me that it is additional (and for many users entirely unnecessary) Reader functionality, such as fetching messages and checking for updates (another thing to turn off), that halts the software from working smoothly, rather than any problem with the host system.
    I'd be very surprised to hear that Adobe are unaware of the problem, as there are many messages about this from a search, and yet there is seemingly no definitive solution as far as I can find.
    I too will recheck here in hope of a solution emerging.

  • Just installed LE9, but no drum loops - does that sound right?

    Hi,
    I've just installed Logic Express 9 and there were no drum loops, and lots of other loops in the Loop browser were greyed out.
    Does Logic Express only come with a few loops, or has something gone wrong in install?
    I have managed to find all the Apple loops from Garage Band and have manually added them to the library, but I was hoping Logic Express would have loops of its own, especially in drums which I found a bit limiting in Garage Band.
    Thanks in advance...

    That depends on how you installed LE. Were you using the Custom Install option?
    Typically, the LE installation program copies both Logic and its content to your Macintosh HD, but you may choose whether to perform a full installation (software + content) or to install just the software without content, using Custom Install, to save disk space. My guess is that you installed just the program without the content (drum loops, etc.). You may try to re-install it without using the Custom Install option.

  • Usage reports are not appearing at the site collection level

    Verified that usage data collection is enabled.
    Verified that Analytics Usage, Feature Use, and Page Requests are selected under Events to Log.
    Verified that the jobs Microsoft SharePoint Foundation Usage Data Import and Microsoft SharePoint Foundation Usage Data Processing are scheduled and working fine under Log Collection Schedule.
    Verified that the Analytics Usage definition's Receivers are defined.
    Verified that the Page Request usage definition's Receivers are defined.
    Verified that both managed accounts (admin & service accounts) have permissions to the database in the production environment.
    Verified that the Search crawl is also working fine.
    The usage file is getting created successfully (see attached); it is typically found in C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\LOGS\Request Usage. In this folder, a .usage file should be created if all the above steps are healthy and working correctly. Ideally we see a *.tmp file and periodically a *.usage file is generated.
    As a last option we have tried re-creating the Usage service application with the above steps.

    The amount and variety of formatting makes reading your post very difficult.
    You have contradictory information there: is this for SharePoint 2013 (in which case this is the wrong forum), or is it for 2010 (in which case your title is misleading)?
    You have not described your issues very clearly. I believe that you cannot see site collection analytics, are these blank/empty or throwing an error? Are the usage files appearing in the expected location?
    Have you checked the timer jobs that run nightly to compile the analytics reports?

  • Hi, I am getting the following error message, hence unable to start Firefox at all. The error message box reads: XULRunner Error: Platform version 1.9.2.4 is not compatible with minVersion >= 1.9.2.6 maxVersion <= 1.9.2.6

    Error: Platform version 1.9.2.4 is not compatible with minVersion >= 1.9.2.6 maxVersion <= 1.9.2.6

    Eric, this can happen after a failed update that partially changed your Firefox install directory.
    Try a clean reinstall:
    1. Download the Firefox installer from [http://mozilla.com Mozilla.com] and save it to your desktop or other location.
    2. Make sure that Firefox is closed completely
    3. Delete the Firefox installation directory (On Windows, this is typically the C:\Program Files\Mozilla Firefox folder that contains "firefox.exe" and other Firefox program files and folders).
    4. Reinstall Firefox by running the installer you downloaded previously.
    Your personal settings, including bookmarks and history, will not be lost.
    See http://kb.mozillazine.org/Standard_diagnostic_-_Firefox#Clean_reinstall

  • Error: Platform version 1.9.2.4 is not compatible with minVersion >= 1.9.2.6 maxVersion <= 1.9.2.6

    Getting error: Platform version 1.9.2.4 is not compatible with minVersion >= 1.9.2.6 maxVersion <= 1.9.2.6

    Try a clean reinstall:
    1. Download the Firefox installer from [http://mozilla.com Mozilla.com] and save it to your desktop or other location.
    2. Make sure that Firefox is closed completely
    3. Delete the Firefox installation directory (On Windows, this is typically the C:\Program Files\Mozilla Firefox folder that contains "firefox.exe" and other Firefox program files and folders).
    4. Reinstall Firefox by running the installer you downloaded previously.
    Your personal settings, including bookmarks and history, will not be lost.
    See http://kb.mozillazine.org/Standard_diagnostic_-_Firefox#Clean_reinstall

  • How do you use a dmg file image to download windows through boot camp?

    After reaching my limits in finding the answer to this problem/question I have, I finally decided to post it for everyone to feel my pain.
    So I don't know if I would be spanked or something, but I downloaded a Windows XP SP3 image from the internet through torrent.
    It was originally an ISO file, so I changed it to a DMG by using Disk Utility.
    Now here's the problem:
    After that, I opened the DMG file image and mounted the virtual drive.
    Then when I started up the Boot Camp installation and clicked "Start Installation", after about fifteen seconds it said that it couldn't find any disk installers!
    AARRGG
    It would be awesome if someone could help me.
    And this other question that I did not mention above is:
    when you run Windows XP on Parallels, is it slower than when you use Windows on a PC or through Boot Camp?
    Thanks, and hope you have a nice day.

    Hi and welcome to Discussions,
    So I don't know if I would be spanked or something, but I downloaded a Windows XP SP3 image from the internet through torrent.
    I won't spank you for that, but I also won't comment on it, as maybe this post will be deleted soon!
    It was originally an ISO file, so I changed it to a DMG by using Disk Utility.
    Now here's the problem:
    After that, I opened the DMG file image and mounted the virtual drive.
    Then when I started up the Boot Camp installation and clicked "Start Installation", after about fifteen seconds it said that it couldn't find any disk installers!
    The Boot Camp Assistant requires Windows to be on a bootable CD/DVD and will not work with any kind of image, ISO or DMG.
    when you run Windows XP on Parallels, is it slower than when you use Windows on a PC or through Boot Camp?
    Yes, since it is not running 'natively' using the real hardware; it is running in a virtualized environment.
    How much slower really depends on the kind of programs you use in Windows. With typical business/office programs you might not even notice any slower performance, but with programs that depend on access to the 'real' hardware, the slowness will often be noticeable, right down to being unusable.
    thanks and hope you have a nice day
    You too
    Stefan

  • How to use a user sequence library

    Hi all,
    I am building up an increasing set of sequence modules that might be handy in a kind of toolbox library for later re-use. How is this best accomplished?
    AppNote 157 states that I should '... isolate non-product-specific operations in a callback or the process model...' But how do I accomplish this, and how do I test these sequences and access them from my target sequence?
    Having my own process model open and trying to start a new 'tools' sequence results in runtime errors, because this very same sequence is the process model AND the target sequence. Trying to build a simple test.seq that calls such a sequence within the actual process model (my individual one) also results in runtime errors - sequence not found.
    How do you organize your code?
    Any enlightening will be appreciated!
    TIA and
    Greetings from Germany!
    -- Uwe

    Hi again,
    I'd like to answer my own Q's at least in parts:
    1. One may add tools to the process model file. Always use your individual process model file to do so.
    2. Do not test it from within the process model file! Test it from a client sequence as you would with the target sequence file when running your sequences.
    3. One has to add the process model file path in quotation marks and with doubled backslashes (as is typical in many programming languages, where a single backslash acts as an escape character).
    One also has to add the tool sequence name (the one to be called within the process model) in quotation marks.
    Somewhat annoying, isn't it? Instead of an easy way to access library sequences, one has to do lots of typing and/or copying & pasting :-(((
    I'll try to access the process model file path programmatically and put it into a station global to ease access to it.
    Probably these tool sequences can also be shifted to an extra sequence file, as some of NI's own sequences do. For instance, the process model sequences call diverse sequences from 'Modelsupport.seq' and 'reportgen_XML.seq'.
    So there is a way to have a user sequence library, but it seems it is either not intended to organize one's code this way, or NI uses better techniques that I have not yet found.
    TIA and
    Greetings from Germany!
    -- Uwe

  • For Your Consideration: Ultimate Lync 2010 client install with SCCM 2007

    While the subject of my post may be very presumptuous, I submit the following for your consideration to answer the often-asked question about how to deploy Lync 2010 client with SCCM.
    Background:
    I cannot understand why Microsoft made the Lync install so darned confusing, complex, and convoluted.
    After our Lync 2010 FE server was up and running and all users migrated off our OCS server to the Lync environment, I spent about a month and a half trying to figure out how to:
    1.  Uninstall the OCS 2007 R2 client
    2.  Install all prerequisites for the Lync client
    3.  Install Lync on all user workstations silently.
    While researching this, the simple answer I kept seeing given to this question was, "just use the .exe with the right switches according to the TechNet article here: http://technet.microsoft.com/en-us/library/gg425733.aspx".  Well, my response is: I tried that, and while the program installed itself correctly when pushed through SCCM, because I was installing with an administrative account (i.e. the SYSTEM account, since our users don't have admin rights), when the install was done Lync would automatically start up in the SYSTEM context, so the user couldn't see it was running; when they went to run it, it wouldn't run for them.  I was unable to find any switch or option to prevent the automatic launch.  I suppose the simple solution to that would be to have the user reboot, but that's unnecessarily disruptive and was contrary to the desire to make this a silent install.
    The next simplest answer I saw was, "extract the MSI and use that with the right switches".  Problem with that is that the MSI by itself doesn't remove the OCS client or install the prerequisites, and also either requires a registry change to even allow
    the MSI to be used or a hacked MSI that bypasses the registry key check.  I tried to put a package together to uninstall OCS, install the prereqs, and use a hacked MSI, but I never could get the MSI hacked properly.  The other problem I ran into
    was detecting if the OCS client was running in a predictable way so I could terminate it, properly uninstall it, and then do the rest of the installations.  It was this problem that ultimately led me to the solution that I'm about to detail and that has
    worked marvellously for us.
    Solution:
    As I said before, when I first looked at this problem, I started by building a typical software deployment package (Computer Management -> Software Distribution -> Packages) and then created the programs to do the install.  My first attempt was
    just with the .exe file provided as-is by Microsoft using the switches they document in the link above for IT-Managed Installation of Lync, and...well, the end result wasn't quite as desirable as hoped.  So, my next attempt was to extract all the prerequisite
    files and the Lync install MSI (both for x86 and x64), attempt to hack it to get around the "UseMSIForLyncInstallation" registry key, and make the command-lines to terminate OCS and uninstall it.
    In the past when I had an install to do with SCCM that also required uninstalling an older version of a given application, I typically used the program-chaining technique.  That's where you have, for example, 3 or more programs that run in a package
    in a sequence and you have Program 3 be set to run after Program 2 does and then set Program 2 to run after Program 1 so you get the desired sequence of Programs 1-2-3 running in that order.  So, I created programs to 1) kill Communicator.exe 2) uninstall
    Communicator 2007 R2 by doing an "msiexec /uninstall {GUID}" 3) install Silverlight 4) install Visual C++ x86 5) optionally install Visual C++ x64, and then 6) install the Lync x86 or x64 client.  That final step was always the point of failure because
    I couldn't get the hacked MSI for the Lync Client install to work.  I also realized that if Communicator wasn't running when the deployment started, that step would fail and cause the whole process to bail out with an error.  That's one of the downsides
    of program-chaining, if one step fails, SCCM completely bails on the deployment.  This is what also led me to the key to my solution:  TASK SEQUENCES.
    I'm not sure how many people out there look in the "Operating System Deployment" area of SCCM 2007 where Task Sequences normally live, but I also wonder how many people realize that Task Sequences can be used for more than just Operating System deployments. 
    One of the biggest advantages of a task sequence is you can set a step to ignore an error condition, such as if you try to terminate a process that isn't running.  Another advantage is that task sequences have some very good built-in conditionals that
    you can apply to steps, for example, having the sequence skip a step if a certain application (or specific version of an application) is not installed on the machine.  Both of those advantages factor highly into my solution.
    OK, for those who already think this is "TL;DR", here's the step-by-step of how to do this:
    First, you need to extract all the files from the LyncSetup.exe for your needed architectures.  We have a mix of Windows XP and Windows 7 64-bit, so my solution here will take both possibilities into account.  To extract the files, just start up
    the .exe like you're going to install it, but then when the first dialog comes up, navigate to "%programfiles%\OCSetup" and copy everything there to a new location.  The main files you need are: Silverlight.exe, vcredist.exe (the x64 LyncSetup.exe includes
    both x86 and x64 Visual C++ runtimes, you need them both, just rename them to differentiate), and Lync.msi (this also comes in an x86 and x64 flavor, so if you have a mix of architectures in your environment, get both and either put them into their own directories
    or rename them to reflect the architecture).
    For my setup, I extracted the files for the x86 and x64 clients and just dumped them each into directories named after the architectures.
    Next, move these files into a directory to your SCCM file server, whatever it might be that you deploy from, in our case, it was just another volume on our central site server.  Go to the SCCM console into Computer Management -> Software Distribution
    -> Packages and then create a new package, call it something meaningful, and then point to the directory on your SCCM file server for the source files.
    Now you need to create 3 to 5 programs inside the package:
    1.  Name: Silverlight
       Command Line: x86\Silverlight.exe /q     (remember, inside my main Lync install folder on my distribution point, I have an x86 directory for the files from the x86 installer and an x64 folder for the files from the x64 installer. 
    The fact is the Silverlight installer is the same in both, so you only need one of them.)
       On the Environment tab:  Program can run whether or not a user is logged in, runs with administrative rights, Runs with UNC name
       On the Advanced tab:  Suppress program notifications
       All other options leave default.
    2.  Name:  Visual C++ x86
        Command Line:  x86\vcredist_x86.exe /q
       On the Requirements tab: Click the radio button next to "This program can run only on specified client platforms:" and then check off the desired x86 clients.
       Environment and Advanced tabs:  same as Silverlight
       (If you have only x64 clients in your environment, change all x86 references to x64.  If you have a mixed environment, create another program identical to this one, replacing references to x86 with x64.)
    3.  Name:  Lync x86
        Command Line:  msiexec /qn /i x86\Lync.msi OCSETUPDIR="C:\Program Files\Microsoft Lync"  (The OCSETUPDIR fixes the issue with the Lync client wanting to "reinstall" itself every time it starts up)
        Requirements, Environment, and Advanced tabs:  Same as with Visual C++ and Silverlight
        (Same deal as above if you have all x64 clients or a mix, either change this program to reflect or make a second program if necessary)
    Now you need to make the Task Sequence.  Go to Computer Management -> Operating System Deployment -> Task Sequences.  Under the Actions pane, click New -> Task Sequence.  In the Create a New Task Sequence dialog, choose "create a
    new custom task sequence", Next, enter a meaningful name for the task sequence like "Install Microsoft Lync", Next, Next, Close.
    The task sequence will have up to 12 steps in it.  I'll break the steps down into 3 phases, the prereqs phase, uninstall OCS phase, and then Lync install phase.
    Prereqs Phase:
    These are the easiest of the steps to do.  Highlight the task sequence and then in the Actions pane, click Edit.
    1.  Click Add -> General -> Install Software.  Name: "Install Microsoft Silverlight".  Select "Install a single application", browse to the Lync package created earlier and then select the Silverlight program.
    2.  Add -> General -> Install Software.  Name: "Install Microsoft Visual C++ 2008 x86".  Install Single Application, browse to the Lync package, select the Visual C++ x86 package.
    As before, if you're an all-x64 environment, replace the x86 references with x64.  If you have a mixed environment, repeat step 2, replacing x86 with x64.
    3.  Add -> General -> Run Command Line.  Name: "Enable Lync Installation".  This step gets around the UseMSIForLyncInstallation registry requirement.  The Lync client MSI simply looks for the presence of this key when it runs, so
    we'll inject it into the registry now and it doesn't require a reboot or anything.  It just has to be there before the client MSI starts.
    Command Line: reg add "hklm\Software\Policies\Microsoft\Communicator" /v UseMSIForLyncInstallation /t REG_DWORD /d 1 /f
    Uninstall OCS Phase:
    This part consists of up to 6 Run Command Line steps.  (Add -> General -> Run Command Line)
    4.  Name: "Terminate Communicator".  Command Line: "taskkill /f /im communicator.exe".  On the Options page, check the box next to "Continue on error".  This will terminate the Communicator process if it's running, and if it's not, it'll
    ignore the error.
    5.  Name: "Terminate Outlook".  Command Line: "taskkill /f /im OUTLOOK.exe".  Check the "Continue on error" on the Options page here too.  Communicator 2007 hooks into Outlook, so if you don't kill Outlook, it might prompt for a reboot
    because components are in use.
    (NOTE:  If necessary, you could also add another step that terminates Internet Explorer because Communicator does hook into IE and without killing IE, it might require a restart after uninstalling Communicator in the next steps.  I didn't run into
    this in my environment, though.  Just repeat step 5, but replace OUTLOOK.EXE with IEXPLORE.EXE)
    6.  Name: "Uninstall Microsoft Office Communicator 2007".  Command Line: "msiexec.exe /qn /uninstall {E5BA0430-919F-46DD-B656-0796F8A5ADFF} /norestart" On the Options page:  Add Condition ->  Installed Software -> Browse to the
    Office Communicator 2007 non-R2 MSI -> select "Match this specific product (Product Code and Upgrade Code)".
    7.  Name:  "Uninstall Microsoft Office Communicator 2007 R2".  Command Line:  "msiexec.exe /qn /uninstall {0D1CBBB9-F4A8-45B6-95E7-202BA61D7AF4} /norestart".  On the Options page:  Add Condition -> Installed Software ->
    Browse to the Office Communicator 2007 R2 MSI -> select "Match any version of this product (Upgrade Code Only)".
    SIDEBAR
    OK, I need to stop here and explain steps 6 and 7 in more detail because it was a gotcha that bit me after I'd already started deploying Lync with this task sequence.  I found out after I'd been deploying for a while that a tech in one of our remote
    offices was reinstalling machines and putting the Communicator 2007 non-R2 client on instead of the R2 client, and my task sequence was expecting R2, mostly because I thought we didn't have any non-R2 clients out there.  So, at first I just had our Help
    Desk people do those installs manually, but later on decided to add support for this possibility into my task sequence.  Now, when you normally uninstall something with msiexec, you would use the Product Code GUID in the command, as you see in steps 6
    and 7.  All applications have a Product Code that's unique to a specific version of an application, but applications also have an Upgrade Code GUID that is unique for an application but common across versions.  This is part of how Windows knows that
    Application X version 1.2 is an upgrade to Application X version 1.1, i.e. Application X would have a common Upgrade Code, but the Product Code would differ between versions 1.1 and 1.2.
    The complication comes in that Communicator 2007 and Communicator 2007 R2 have a common Upgrade Code, but different Product Codes and the "MSIEXEC /uninstall" command uses the Product Code, not the Upgrade Code.  This means that if I didn't have step
    6 to catch the non-R2 clients, step 7 would be fine for the R2 clients, but fail on non-R2 clients because the Product Code in the MSIEXEC command would be wrong.  Luckily, we only had one version of the non-R2 client to deal with versus 4 or 5 versions
    of the R2 client.  So, I put the command to remove Communicator 2007 non-R2 first and checked for that specific product and version on the machine.  If it was present, it uninstalled it and then skipped over the R2 step.  If non-R2 was not present,
    it skipped that step and instead uninstalled any version of the R2 client.  It's important that steps 6 and 7 are in the order they are because if you swap them, then you'd have the same outcome as if step 6 wasn't there.  What if neither is on the
    machine?  Well the collection this was targeted to included only machines with any version of Communicator 2007 installed, so this was not a problem.  It was assumed that the machines had some version of Communicator on them.
    8.  Name:  "Uninstall Conferencing Add-In for Outlook".  Command Line:  "msiexec.exe /qn /uninstall {730000A1-6206-4597-966F-953827FC40F7} /norestart".  Check the "Continue on error" on the Options Page and then Add Condition ->
    Installed Software -> Browse to the MSI for this optional component and set it to match any version of the product.  If you don't use this in your environment, you can omit this step.
    9.  Name:  "Uninstall Live Meeting 2007".  Command Line:  "msiexec.exe /qn /uninstall {69CEBEF8-52AA-4436-A3C9-684AF57B0307} /norestart".  Check the "Continue on error" on the Options Page and then Add Condition -> Installed Software
    -> Browse to the MSI for this optional component and set it to match any version of the product.  If you don't use this in your environment, you can omit this step.
    Install Lync phase:
    Now, finally the main event, and it's pretty simple:
    10.  Click Add -> General -> Install Software.  Name: "Install Microsoft Lync 2010 x86".  Select "Install a single application", browse to the Lync package created earlier and then select the "Lync x86" program.  As before, if you
    only have x64 in your environment, replace the x86 with x64, or if you have a mixed environment, copy this step, replacing x86 references with x64.
    And the task sequence is done!  The final thing you need to do now is highlight the task, click Advertise in the Actions pane, and deploy it to a collection like you would with any other software distribution advertisement.  Go get a beer!
    Some final notes to keep in mind:
    1.  You can't make a task sequence totally silent...easily.  Users will get balloon notifications that an application is available to install.  The notifications cannot be suppressed through the GUI.  I've found scripts that supposedly
    hack the advertisement to make it be silent, but neither of them worked for me.  It was OK, though because in the end we wanted users, especially laptop users, to be able to pick a convenient time to do the upgrade.  The task sequence will appear
    in the "Add/Remove Programs" or "Programs and Features" Control Panel.  You can still do mandatory assignments to force the install to happen, you just can't make it totally silent.  On the plus side, the user shouldn't have to reboot at any point
    during or after the install!
    2.  In the advertisement setup, you can optionally show the task sequence progress.  I've configured the individual installs in this process to be silent, however, I did show the user the task sequence progress.  This means instead of seeing
    5 or 6 Installer windows pop up and go away, the user will have a single progress bar with the name of the step that is executing.
    3.  One step that I didn't consider when I actually did this was starting the Lync client as the user when the install was complete.  The user either had to start the client manually or just let it start on its own at the next logon.  However,
    while I was writing this, I realized that I could possibly start the client after installing by making another Program in the Lync Package with a command line that was along the lines of "%programfiles%\Microsoft Lync\communicator.exe" and then in the Environment
    tab, set it to "Run with user's rights" "only when a user is logged on".
    4.  My first revision of this task sequence has the Prereqs phase happening after the OCS uninstall phase, but I kept running into problems where the Silverlight installer would throw some bizarre error that it couldn't open a window or something wacky
    and it would fail.  Problem was, I couldn't re-run the task sequence because now it would fail because OCS had been uninstalled, so that's why the Prereqs happen first.  It ran much more reliably this way.
    5.  For some reason that baffles me, when I'd check the logs on the Site Server to monitor the deployment, I'd frequently see situations where the task sequence would start on a given machine, complete successfully, almost immediately start again, and
    then fail.  I'm not sure what is causing that, but I suspect either users are going to Add/Remove Programs and double-clicking the Add button to start the install instead of just single-clicking it, or the notification that they have software to install
    doesn't go away immediately or Lync doesn't start up right after the install, so they think the first time it didn't take and try it a second time.
    I hope this helps some of you SCCM and Lync admins out there!

    On Step 8 I found multiple product codes for the Conferencing Add-In for Outlook.  Here's a list of the ones I found in the machines on my network:
    {987CAEDE-EB67-4D5A-B0C0-AE0640A17B5F}
    {2BB9B2F5-79E7-4220-B903-22E849100547}
    {13BEAC7C-69C1-4A9E-89A3-D5F311DE2B69}
    {C5586971-E3A9-432A-93B7-D1D0EF076764}
    I'm sure there are others; just be mindful that this add-in can have numerous product codes.

  • How to select the data efficiently from the table

    Hi everyone,
    I need some help in selecting data from the FAGLFLEXA table. I have to select many amounts for different groups of G/L accounts (the groups are predefined here; each contains a set of G/L account numbers).
    If I select separately for each group, it will be a performance issue. What should I do to avoid that? Can anyone suggest a method or a sample query so that I can perform the task efficiently?

    Hi,
    1. Select the data once and keep it in an internal table.
    2. Avoid a SELECT inside LOOP ... ENDLOOP.
    3. Try to use FOR ALL ENTRIES.
    Check the details below.
    Hi Praveen,
    Performance Notes
    1.Keep the Result Set Small
    You should aim to keep the result set small. This reduces both the amount of memory used in the database system and the network load when transferring data to the application server. To reduce the size of your result sets, use the WHERE and HAVING clauses.
    Using the WHERE Clause
    Whenever you access a database table, you should use a WHERE clause in the corresponding Open SQL statement. Even if a program containing a SELECT statement with no WHERE clause performs well in tests, it may slow down rapidly in your production system, where the data volume increases daily. You should only dispense with the WHERE clause in exceptional cases where you really need the entire contents of the database table every time the statement is executed.
    When you use the WHERE clause, the database system optimizes the access and only transfers the required data. You should never transfer unwanted data to the application server and then filter it using ABAP statements.
    Using the HAVING Clause
    After selecting the required lines in the WHERE clause, the system then processes the GROUP BY clause, if one exists, and summarizes the database lines selected. The HAVING clause allows you to restrict the grouped lines, and in particular, the aggregate expressions, by applying further conditions.
    Effect
    If you use the WHERE and HAVING clauses correctly:
    • There are no more physical I/Os in the database than necessary
    • No unwanted data is stored in the database cache (it could otherwise displace data that is actually required)
    • The CPU usage of the database host is minimized
    • The network load is reduced, since only the data that is required by the application is transferred to the application server.
    Minimize the Amount of Data Transferred
    Data is transferred between the database system and the application server in blocks. Each block is up to 32 KB in size (the precise size depends on your network communication hardware). Administration information is transported in the blocks as well as the data.
    To minimize the network load, you should transfer as few blocks as possible. Open SQL allows you to do this as follows:
    Restrict the Number of Lines
    If you only want to read a certain number of lines in a SELECT statement, use the UP TO <n> ROWS addition in the FROM clause. This tells the database system only to transfer <n> lines back to the application server. This is more efficient than transferring more lines than necessary back to the application server and then discarding them in your ABAP program.
    If you expect your WHERE clause to return a large number of duplicate entries, you can use the DISTINCT addition in the SELECT clause.
    Restrict the Number of Columns
    You should only read the columns from a database table that you actually need in the program. To do this, list the columns in the SELECT clause. Note here that the INTO CORRESPONDING FIELDS addition in the INTO clause is only efficient with large volumes of data, otherwise the runtime required to compare the names is too great. For small amounts of data, use a list of variables in the INTO clause.
    Do not use * to select all columns unless you really need them. However, if you list individual columns, you may have to adjust the program if the structure of the database table is changed in the ABAP Dictionary. If you specify the database table dynamically, you must always read all of its columns.
    Use Aggregate Functions
    If you only want to use data for calculations, it is often more efficient to use the aggregate functions of the SELECT clause than to read the individual entries from the database and perform the calculations in the ABAP program.
    Aggregate functions allow you to find out the number of values and find the sum, average, minimum, and maximum values.
    Following an aggregate expression, only its result is transferred from the database.
    Data Transfer when Changing Table Lines
    When you use the UPDATE statement to change lines in the table, you should use the WHERE clause to specify the relevant lines, and then SET statements to change only the required columns.
    When you use a work area to overwrite table lines, too much data is often transferred. Furthermore, this method requires an extra SELECT statement to fill the work area.
    Minimize the Number of Data Transfers
    In every Open SQL statement, data is transferred between the application server and the database system. Furthermore, the database system has to construct or reopen the appropriate administration data for each database access. You can therefore minimize the load on the network and the database system by minimizing the number of times you access the database.
    Multiple Operations Instead of Single Operations
    When you change data using INSERT, UPDATE, and DELETE, use internal tables instead of single entries. If you read data using SELECT, it is worth using multiple operations if you want to process the data more than once; otherwise, a simple SELECT loop is more efficient.
    Avoid Repeated Access
    As a rule you should read a given set of data once only in your program, and using a single access. Avoid accessing the same data more than once (for example, SELECT before an UPDATE).
    Avoid Nested SELECT Loops
    A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. You should therefore only use nested SELECT loops if the selection in the outer loop contains very few lines.
    However, using combinations of data from different database tables is more the rule than the exception in the relational data model. You can use the following techniques to avoid nested SELECT statements:
    ABAP Dictionary Views
    You can define joins between database tables statically and systemwide as views in the ABAP Dictionary. ABAP Dictionary views can be used by all ABAP programs. One of their advantages is that fields that are common to both tables (join fields) are only transferred once from the database to the application server.
    Views in the ABAP Dictionary are implemented as inner joins. If the inner table contains no lines that correspond to lines in the outer table, no data is transferred. This is not always the desired result. For example, when you read data from a text table, you want to include lines in the selection even if the corresponding text does not exist in the required language. If you want to include all of the data from the outer table, you can program a left outer join in ABAP.
    The links between the tables in the view are created and optimized by the database system. Like database tables, you can buffer views on the application server. The same buffering rules apply to views as to tables. In other words, it is most appropriate for views that you use mostly to read data. This reduces the network load and the amount of physical I/O in the database.
    Joins in the FROM Clause
    You can read data from more than one database table in a single SELECT statement by using inner or left outer joins in the FROM clause.
    The disadvantage of using joins is that redundant data is read from the hierarchically-superior table if there is a 1:N relationship between the outer and inner tables. This can considerably increase the amount of data transferred from the database to the application server. Therefore, when you program a join, you should ensure that the SELECT clause contains a list of only the columns that you really need. Furthermore, joins bypass the table buffer and read directly from the database. For this reason, you should use an ABAP Dictionary view instead of a join if you only want to read the data.
    The runtime of a join statement is heavily dependent on the database optimizer, especially when it contains more than two database tables. However, joins are nearly always quicker than using nested SELECT statements.
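    A minimal inner-join sketch over the demo tables SPFLI and SFLIGHT; note that the SELECT clause lists only the columns that are actually needed (the city literal is illustrative):
        DATA: BEGIN OF wa,
                carrid TYPE spfli-carrid,
                connid TYPE spfli-connid,
                fldate TYPE sflight-fldate,
              END OF wa.

        SELECT p~carrid p~connid f~fldate
          INTO (wa-carrid, wa-connid, wa-fldate)
          FROM spfli AS p INNER JOIN sflight AS f
            ON p~carrid = f~carrid AND
               p~connid = f~connid
          WHERE p~cityfrom = 'FRANKFURT'.
          " ... process wa ...
        ENDSELECT.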
    Subqueries in the WHERE and HAVING Clauses
    Another way of accessing more than one database table in the same Open SQL statement is to use subqueries in the WHERE or HAVING clause. The data from a subquery is not transferred to the application server. Instead, it is used to evaluate conditions in the database system. This is a simple and effective way of programming complex database operations.
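    As a sketch, a correlated subquery on the demo table SFLIGHT that selects the flights which are less occupied than the average for their connection; the averaging is done entirely in the database:
        DATA wa TYPE sflight.

        SELECT carrid connid fldate seatsocc
          INTO CORRESPONDING FIELDS OF wa
          FROM sflight AS f
          WHERE seatsocc < ( SELECT AVG( seatsocc )
                               FROM sflight
                               WHERE carrid = f~carrid
                                 AND connid = f~connid ).
          " ... process wa ...
        ENDSELECT.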
    Using Internal Tables
    It is also possible to avoid nested SELECT loops by placing the selection from the outer loop in an internal table and then running the inner selection once only using the FOR ALL ENTRIES addition. This technique stems from the time before joins were allowed in the FROM clause. On the other hand, it does prevent redundant data from being transferred from the database.
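    A rough sketch of the FOR ALL ENTRIES technique with the demo tables SPFLI and SFLIGHT (the city literal is illustrative):
        DATA: lt_spfli   TYPE STANDARD TABLE OF spfli,
              lt_sflight TYPE STANDARD TABLE OF sflight.

        SELECT * FROM spfli
          INTO TABLE lt_spfli
          WHERE cityfrom = 'FRANKFURT'.

        " With an empty driver table, FOR ALL ENTRIES would select
        " every line of SFLIGHT, so guard against that case.
        IF lt_spfli IS NOT INITIAL.
          SELECT * FROM sflight
            INTO TABLE lt_sflight
            FOR ALL ENTRIES IN lt_spfli
            WHERE carrid = lt_spfli-carrid
              AND connid = lt_spfli-connid.
        ENDIF.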
    Using a Cursor to Read Data
    A further method is to decouple the INTO clause from the SELECT statement by opening a cursor using OPEN CURSOR and reading the data line by line using FETCH NEXT CURSOR. You must open a new cursor for each nested loop, and you yourself must ensure that the correct lines are read from the database tables in the correct order. This usually requires a foreign key relationship between the database tables, and that they are sorted by the foreign key.
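    A minimal cursor sketch on the demo table SPFLI:
        DATA: lc_cursor TYPE cursor,
              wa_spfli  TYPE spfli.

        OPEN CURSOR lc_cursor FOR
          SELECT * FROM spfli ORDER BY PRIMARY KEY.

        DO.
          FETCH NEXT CURSOR lc_cursor INTO wa_spfli.
          IF sy-subrc <> 0.
            EXIT.
          ENDIF.
          " ... process wa_spfli ...
        ENDDO.

        CLOSE CURSOR lc_cursor.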
    Minimize the Search Overhead
    You minimize the size of the result set by using the WHERE and HAVING clauses. To increase the efficiency of these clauses, you should formulate them to fit with the database table indexes.
    Database Indexes
    Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
    The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE.
    If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
    You specify the fields of secondary indexes using the ABAP Dictionary. You can also determine whether the index is unique or not. However, you should not create secondary indexes to cover all possible combinations of fields.
    Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table.
    If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.
    Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column’s selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries. If this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.
    If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
    Formulating Conditions for Indexes
    You should bear in mind the following when formulating conditions for the WHERE and HAVING clauses so that the system can use a database index and does not have to use a full table scan.
    Check for Equality and Link Using AND
    The database index search is particularly efficient if you check all index fields for equality (= or EQ) and link the expressions using AND.
    Use Positive Conditions
    Database indexes only support conditions that describe the result in positive terms, for example, with EQ or LIKE. Negative expressions such as NE or NOT LIKE cannot be used for an index search.
    If possible, avoid using the NOT operator in the WHERE clause, because it is not supported by database indexes; invert the logical expression instead.
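    For example, a condition on the demo table SFLIGHT can often be inverted like this (the date literal is only illustrative; whether an index is actually used still depends on the optimizer):
        DATA lt_flights TYPE STANDARD TABLE OF sflight.

        " Negative formulation - the NOT prevents an index range scan:
        " SELECT * FROM sflight INTO TABLE lt_flights
        "   WHERE NOT fldate < '20240101'.

        " Equivalent positive formulation that an index can support:
        SELECT * FROM sflight
          INTO TABLE lt_flights
          WHERE fldate >= '20240101'.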
    Using OR
    The optimizer usually stops working when an OR expression occurs in the condition. This means that the columns checked using OR are not included in the index search. An exception to this are OR expressions at the outermost level of the condition. You should try to reformulate conditions that apply OR expressions to index-relevant columns, for example, into an IN condition.
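    For example, a sketch of turning an OR chain on a single index column into an IN list (the carrier IDs are only illustrative):
        DATA lt_flights TYPE STANDARD TABLE OF sflight.

        " OR on the index column - the optimizer may ignore the index:
        " SELECT * FROM sflight INTO TABLE lt_flights
        "   WHERE carrid = 'LH' OR carrid = 'AA' OR carrid = 'UA'.

        " Reformulated as an IN condition on the same column:
        SELECT * FROM sflight
          INTO TABLE lt_flights
          WHERE carrid IN ('LH', 'AA', 'UA').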
    Using Part of the Index
    If you construct an index from several columns, the system can still use it even if you only specify a few of the columns in a condition. However, in this case, the sequence of the columns in the index is important. A column can only be used in the index search if all of the columns before it in the index definition have also been specified in the condition.
    Checking for Null Values
    The IS NULL condition can cause problems with indexes. Some database systems do not store null values in the index structure. Consequently, this field cannot be used in the index.
    Avoid Complex Conditions
    Avoid complex conditions, since the statements have to be broken down into their individual components by the database system.
    Reduce the Database Load
    Unlike application servers and presentation servers, there is only one database server in your system. You should therefore aim to reduce the database load as much as possible. You can use the following methods:
    Buffer Tables on the Application Server
    You can considerably reduce the time required to access data by buffering it in the application server table buffer. Reading a single entry from table T001 can take between 8 and 600 milliseconds, while reading it from the table buffer takes 0.2 - 1 milliseconds.
    Whether a table can be buffered or not depends on its technical attributes in the ABAP Dictionary. There are three buffering types:
    • Resident buffering (100%): The first time the table is accessed, its entire contents are loaded in the table buffer.
    • Generic buffering: In this case, you need to specify a generic key (some of the key fields) in the technical settings of the table in the ABAP Dictionary. The table contents are then divided into generic areas. When you access data with one of the generic keys, the whole generic area is loaded into the table buffer. Client-specific tables are often buffered generically by client.
    • Partial buffering (single entry): Only single entries are read from the database and stored in the table buffer.
    When you read from buffered tables, the following happens:
    1. An ABAP program requests data from a buffered table.
    2. The ABAP processor interprets the Open SQL statement. If the table is defined as a buffered table in the ABAP Dictionary, the ABAP processor checks in the local buffer on the application server to see if the table (or part of it) has already been buffered.
    3. If the data is already in the buffer, it is passed to the program from there. If the table has not yet been buffered, the request is passed on to the database.
    4. The database server passes the data to the application server, which places it in the table buffer.
    5. The data is passed to the program.
    When you change a buffered table, the following happens:
    1. The database table is changed and the buffer on the application server is updated. The database interface logs the update statement in the table DDLOG. If the system has more than one application server, the buffer on the other servers is not updated at once.
    2. All application servers periodically read the contents of table DDLOG, and delete the corresponding contents from their buffers where necessary. The granularity depends on the buffering type. The table buffers in a distributed system are generally synchronized every 60 seconds (parameter: rsdisp/bufreftime).
    3. Within this period, users on non-synchronized application servers will read old data. The data is not recognized as obsolete until the next buffer synchronization. The next time it is accessed, it is re-read from the database.
    You should buffer the following types of tables:
    • Tables that are read very frequently
    • Tables that are changed very infrequently
    • Relatively small tables (few lines, few columns, or short columns)
    • Tables where delayed update is acceptable.
    Once you have buffered a table, take care not to use any Open SQL statements that bypass the buffer.
    The SELECT statement bypasses the buffer when you use any of the following:
    • The BYPASSING BUFFER addition in the FROM clause
    • The DISTINCT addition in the SELECT clause
    • Aggregate expressions in the SELECT clause
    • Joins in the FROM clause
    • The IS NULL condition in the WHERE clause
    • Subqueries in the WHERE clause
    • The ORDER BY clause
    • The GROUP BY clause
    • The FOR UPDATE addition
    Furthermore, all Native SQL statements bypass the buffer.
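    For example, assuming T001 is buffered as described above, the two reads below behave differently (a sketch; the company code is illustrative):
        DATA wa_t001 TYPE t001.

        " Normally satisfied from the table buffer on the application server:
        SELECT SINGLE * FROM t001
          INTO wa_t001
          WHERE bukrs = '0001'.

        " Explicitly bypasses the buffer and reads from the database:
        SELECT SINGLE * FROM t001 BYPASSING BUFFER
          INTO wa_t001
          WHERE bukrs = '0001'.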
    Avoid Reading Data Repeatedly
    If you avoid reading the same data repeatedly, you both reduce the number of database accesses and reduce the load on the database. Furthermore, a "dirty read" may occur with database systems other than Oracle. This means that the second time you read data from a database table, it may be different from the data read the first time. To ensure that the data in your program is consistent, you should read it once only and then store it in an internal table.
    Sort Data in Your ABAP Programs
    The ORDER BY clause in the SELECT statement is not necessarily optimized by the database system or executed with the correct index. This can result in increased runtime costs. You should only use ORDER BY if the database sort uses the same index with which the table is read. To find out which index the system uses, use SQL Trace in the ABAP Workbench Performance Trace. If the indexes are not the same, it is more efficient to read the data into an internal table or extract and sort it in the ABAP program using the SORT statement.
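    As a sketch, reading into an internal table and sorting in ABAP instead of using ORDER BY (the carrier ID is illustrative):
        DATA lt_flights TYPE STANDARD TABLE OF sflight.

        " Read without ORDER BY ...
        SELECT carrid connid fldate price
          INTO CORRESPONDING FIELDS OF TABLE lt_flights
          FROM sflight
          WHERE carrid = 'LH'.

        " ... and sort on the application server instead.
        SORT lt_flights BY price DESCENDING.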
    Use Logical Databases
    SAP supplies logical databases for all applications. A logical database is an ABAP program that decouples Open SQL statements from application programs. They are optimized for the best possible database performance. However, it is important that you use the right logical database: the hierarchy of the data you want to read must reflect the structure of the logical database, otherwise using it can have a negative effect on performance. For example, if you want to read data from a table right at the bottom of the hierarchy of the logical database, the logical database has to read at least the key fields of all the tables above it in the hierarchy. In this case, it is more efficient to use a SELECT statement.
    Work Processes
    Work processes execute the individual dialog steps in R/3 applications. The next two sections describe firstly the structure of a work process, and secondly the different types of work process in the R/3 System.
    Structure of a Work Process
    Work processes execute the dialog steps of application programs. They are components of an application server. The following diagram shows the components of a work process:
    Each work process contains two software processors and a database interface.
    Screen Processor
    In R/3 application programming, there is a difference between user interaction and processing logic. From a programming point of view, user interaction is controlled by screens. As well as the actual input mask, a screen also consists of flow logic. The screen flow logic controls a large part of the user interaction. The R/3 Basis system contains a special language for programming screen flow logic. The screen processor executes the screen flow logic. Via the dispatcher, it takes over the responsibility for communication between the work process and the SAPgui, calls modules in the flow logic, and ensures that the field contents are transferred from the screen to the flow logic.
    ABAP Processor
    The actual processing logic of an application program is written in ABAP - SAP’s own programming language. The ABAP processor executes the processing logic of the application program, and communicates with the database interface. The screen processor tells the ABAP processor which module of the screen flow logic should be processed next. The following diagram illustrates the interaction between the screen processor and the ABAP processor when an application program is running.
    Database Interface
    The database interface provides the following services:
    • Establishing and terminating connections between the work process and the database.
    • Access to database tables
    • Access to R/3 Repository objects (ABAP programs, screens and so on)
    • Access to catalog information (ABAP Dictionary)
    • Controlling transactions (commit and rollback handling)
    • Table buffer administration on the application server.
    The following diagram shows the individual components of the database interface:
    The diagram shows that there are two different ways of accessing databases: Open SQL and Native SQL.
    Open SQL statements are a subset of Standard SQL that is fully integrated in ABAP. They allow you to access data irrespective of the database system that the R/3 installation is using. Open SQL consists of the Data Manipulation Language (DML) part of Standard SQL; in other words, it allows you to read (SELECT) and change (INSERT, UPDATE, DELETE) data. The tasks of the Data Definition Language (DDL) and Data Control Language (DCL) parts of Standard SQL are performed in the R/3 System by the ABAP Dictionary and the authorization system. These provide a unified range of functions, irrespective of database, and also contain functions beyond those offered by the various database systems.
    Open SQL also goes beyond Standard SQL to provide statements that, in conjunction with other ABAP constructions, can simplify or speed up database access. It also allows you to buffer certain tables on the application server, saving excessive database access. In this case, the database interface is responsible for comparing the buffer with the database. Buffers are partly stored in the working memory of the current work process, and partly in the shared memory for all work processes on an application server. Where an R/3 System is distributed across more than one application server, the data in the various buffers is synchronized at set intervals by the buffer management. When buffering the database, you must remember that data in the buffer is not always up to date. For this reason, you should only use the buffer for data which does not often change.
    Native SQL is only loosely integrated into ABAP, and allows access to all of the functions contained in the programming interface of the respective database system. Unlike Open SQL statements, Native SQL statements are not checked and converted, but instead are sent directly to the database system. Programs that use Native SQL are specific to the database system for which they were written. R/3 applications contain as little Native SQL as possible. In fact, it is only used in a few Basis components (for example, to create or change table definitions in the ABAP Dictionary).
    The database-dependent layer in the diagram serves to hide the differences between database systems from the rest of the database interface. You choose the appropriate layer when you install the Basis system. Thanks to the standardization of SQL, the differences in the syntax of statements are very slight. However, the semantics and behavior of the statements have not been fully standardized, and the differences in these areas can be greater. When you use Native SQL, the function of the database-dependent layer is minimal.
    Types of Work Process
    Although all work processes contain the components described above, they can still be divided into different types. The type of a work process determines the kind of task for which it is responsible in the application server. It does not specify a particular set of technical attributes. The individual tasks are distributed to the work processes by the dispatcher.
    Before you start your R/3 System, you determine how many work processes it will have, and what their types will be. The dispatcher starts the work processes and only assigns them tasks that correspond to their type. This means that you can distribute work process types to optimize the use of the resources on your application servers.
    The following diagram shows again the structure of an application server, but this time, includes the various possible work process types:
    The various work processes are described briefly below. Other parts of this documentation describe the individual components of the application server and the R/3 System in more detail.
    Dialog Work Process
    Dialog work processes deal with requests from an active user to execute dialog steps.
    Update Work Process
    Update work processes execute database update requests. Update requests are part of an SAP LUW; they bundle the database operations resulting from the dialog into a database LUW, which is then processed in the background.
    Background Work Process
    Background work processes process programs that can be executed without user interaction (background jobs).
    Enqueue Work Process
    The enqueue work process administers a lock table in the shared memory area. The lock table contains the logical database locks for the R/3 System and is an important part of the SAP LUW concept. In an R/3 System, you may only have one lock table. You may therefore also only have one application server with enqueue work processes.
    Spool Work Process
    The spool work process passes sequential datasets to a printer or to optical archiving. Each application server may contain several spool work processes.
    The services offered by an application server are determined by the types of its work processes. One application server may, of course, have more than one function. For example, it may be both a dialog server and the enqueue server, if it has several dialog work processes and an enqueue work process.
    You can use the system administration functions to switch a work process between dialog and background modes while the system is still running. This allows you, for example, to switch an R/3 System between day and night operation, where you have more dialog than background work processes during the day, and the other way around during the night.
    ABAP Application Server
    R/3 programs run on application servers. They are an important component of the R/3 System. The following sections describe application servers in more detail.
    Structure of an ABAP Application Server
    The application layer of an R/3 System is made up of the application servers and the message server. Application programs in an R/3 System are run on application servers. The application servers communicate with the presentation components, the database, and also with each other, using the message server.
    The following diagram shows the structure of an application server:
    The individual components are:
    Work Processes
    An application server contains work processes, which are components that can execute an application (that is, one dialog step at a time). Each work process is linked to a memory area containing the context of the application being run. The context contains the current data for the application program, which needs to be available in each dialog step. Further information about the different types of work process is contained later on in this documentation.
    Dispatcher
    Each application server contains a dispatcher. The dispatcher is the link between the work processes and the users logged onto the application server. Its task is to receive requests for dialog steps from the SAP GUI and direct them to a free work process. In the same way, it directs screen output resulting from the dialog step back to the appropriate user.
    Gateway
    Each application server contains a gateway. This is the interface for the R/3 communication protocols (RFC, CPI/C). It can communicate with other application servers in the same R/3 System, with other R/3 Systems, with R/2 Systems, or with non-SAP systems.
    The application server structure as described here aids the performance and scalability of the entire R/3 System. The fixed number of work processes and dispatching of dialog steps leads to optimal memory use, since it means that certain components and the memory areas of a work process are application-independent and reusable. The fact that the individual work processes work independently makes them suitable for a multi-processor architecture. The methods used in the dispatcher to distribute tasks to work processes are discussed more closely in the section Dispatching Dialog Steps.
    Shared Memory
    All of the work processes on an application server use a common main memory area called shared memory to save contexts or to buffer constant data locally.
    The resources that all work processes use (such as programs and table contents) are contained in shared memory. Memory management in the R/3 System ensures that the work processes always address the correct context, that is the data relevant to the current state of the program that is running. A mapping process projects the required context for a dialog step from shared memory into the address of the relevant work process. This reduces the actual copying to a minimum.
    Local buffering of data in the shared memory of the application server reduces the number of database reads required. This reduces access times for application programs considerably. For optimal use of the buffer, you can concentrate individual applications (financial accounting, logistics, human resources) into separate application server groups.
    Database Connection
    When you start up an R/3 System, each application server registers its work processes with the database layer, and receives a single dedicated channel for each. While the system is running, each work process is a user (client) of the database system (server). You cannot change the work process registration while the system is running. Neither can you reassign a database channel from one work process to another. For this reason, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. This has important consequences for the programming model explained below.
    Dispatching Dialog Steps
    The number of users logged onto an application server is often many times greater than the number of available work processes, and it is not restricted by the R/3 System architecture. Furthermore, each user can run several applications at once. The dispatcher has the important task of distributing all dialog steps among the work processes on the application server.
    The following diagram is an example of how this might happen:
    1. The dispatcher receives the request to execute a dialog step from user 1 and directs it to work process 1, which happens to be free. The work process addresses the context of the application program (in shared memory) and executes the dialog step. It then becomes free again.
    2. The dispatcher receives the request to execute a dialog step from user 2 and directs it to work process 1, which is now free again. The work process executes the dialog step as in step 1.
    3. While work process 1 is still working, the dispatcher receives a further request from user 1 and directs it to work process 2, which is free.
    4. After work processes 1 and 2 have finished processing their dialog steps, the dispatcher receives another request from user 1 and directs it to work process 1, which is free again.
    5. While work process 1 is still working, the dispatcher receives a further request from user 2 and directs it to work process 2, which is free.
    From this example, we can see that:
    • A dialog step from a program is assigned to a single work process for execution.
    • The individual dialog steps of a program can be executed on different work processes, and the program context must be addressed for each new work process.
    • A work process can execute dialog steps of different programs from different users.
    The example does not show that the dispatcher tries to distribute the requests to the work processes such that the same work process is used as often as possible for the successive dialog steps in an application. This is useful, since it saves the program context having to be addressed each time a dialog step is executed.
    Dispatching and the Programming Model
    The separation of application and presentation layer made it necessary to split up application programs into dialog steps. This, and the fact that dialog steps are dispatched to individual work processes, has had important consequences for the programming model.
    As mentioned above, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. The contents of the database must be consistent at its beginning and end. The beginning and end of a database LUW are defined by a commit command to the database system (database commit). During a database LUW, that is, between two database commits, the database system itself ensures consistency within the database. In other words, it takes over tasks such as locking database entries while they are being edited, or restoring the old data (rollback) if a step terminates in an error.
    A typical SAP application program extends over several screens and the corresponding dialog steps. The user requests database changes on the individual screens that should lead to the database being consistent once the screens have all been processed. However, the individual dialog steps run on different work processes, and a single work process can process dialog steps from other applications. It is clear that two or more independent applications whose dialog steps happen to be processed on the same work process cannot be allowed to work with the same database LUW.
    Consequently, a work process must open a separate database LUW for each dialog step. The work process sends a commit command (database commit) to the database at the end of each dialog step in which it makes database changes. These commit commands are called implicit database commits, since they are not explicitly written into the application program.
    These implicit database commits mean that a database LUW can be kept open for a maximum of one dialog step. This leads to a considerable reduction in database load, serialization, and deadlocks, and enables a large number of users to use the same system.
    However, the question now arises of how this method (1 dialog step = 1 database LUW) can be reconciled with the demand to make commits and rollbacks dependent on the logical flow of the application program instead of the technical distribution of dialog steps. Database update requests that depend on one another form logical units in the program that extend over more than one dialog step. The database changes associated with these logical units must be executed together and must also be able to be undone together.
    The SAP programming model contains a series of bundling techniques that allow you to group database updates together in logical units. The section of an R/3 application program that bundles a set of logically-associated database operations is called an SAP LUW. Unlike a database LUW, a SAP LUW includes all of the dialog steps in a logical unit, including the database update.
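    One common bundling technique is to register update function modules during the dialog steps and trigger them all with a single COMMIT WORK; in the sketch below, the function module Z_UPDATE_SBOOK and its parameter are hypothetical, while SBOOK is the flight-model booking table:
        DATA ls_booking TYPE sbook.

        " ... fill ls_booking during a dialog step ...

        " Only registers the update request; nothing is written yet.
        CALL FUNCTION 'Z_UPDATE_SBOOK' IN UPDATE TASK
          EXPORTING
            is_booking = ls_booking.

        " Closes the SAP LUW: all registered update requests are now
        " executed together by an update work process in one database LUW.
        COMMIT WORK.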
    Happy Reading...
    shibu

  • Macbook Air right for an engineering major?

    I'm currently a Biomedical and Mechanical Engineering double major at UC Davis, and I wanted some input on whether or not a MacBook Air would be suitable for my needs. The 13" 2013 unit with the Haswell CPU is only $999 on Amazon right now, and I figured it is priced so competitively with other ultrabooks now that I can actually consider it.
    As far as I know, I need access to a Python and C/C++ IDE and compiler, MATLAB, and several CAD applications that my school gives us for certain classes. That's pretty much all I need from now through my bachelor's degree. I'll also dual boot Windows 8 since I may occasionally miss Windows, and this will be my only PC from now on (I completely stopped gaming so I sold my custom desktop). I'll also run any programs that are Windows-only in Windows, of course.
    Sure, I will be doing some software design, but the ULV i5 is plenty powerful. I'm not doing some insane rendering or anything. Just some very basic rendering and programming.
    So I have 3 questions:
    1. Is a Macbook Air right for an Engineering major? How has having a Macbook "changed" you as an engineer?
    2. Is there any way to have a Windows Partition, an OSX Partition, and an exFAT partition to share my music and videos across both OS's? Both Windows and OSX can read and write to exFAT right?
    3. What external hard drive and USB drive formats (other than FAT since I have files larger than 4GB) are best for transferring files to and from OSX and Windows machines?
    Thanks for the help! I'm looking forward to purchasing a Mac since they are so competitively priced now. I think Apple made a great move with these price cuts.

    I second the above opinion, as I'm currently an Electrical Engineering student. I've been using an 11" Air for everything (full options: i7, 8GB, 512GB SSD). I was using a mid-2011 model, I just got the new 2013 model, and I have to say both have performed outstandingly.
    Since you're gonna be an engineer I'll tell you how I run my machine to let you know how powerful it is for school:
    I run 4 desktops that I swipe between constantly with 10+ programs running, most of which are not your normal small programs (AutoCAD, MotionBuilder, etc.).
    My 5th desktop that I swipe between is Parallels Desktop running Windows 7 for the non-Mac apps; inside of Windows I typically run 3+ programs, which include AutoCAD, Quartus (electronic modeling) and PSpice.
    Keep in mind that I run all of this at the same time and it's probably too much (I don't need to run everything at once, I just love to keep everything up so between classes I don't have to prepare for the next).
    All I have to say is the machine kills it. I'm serious: no lag, fluid AutoCAD building, low heat, and great battery. I got about 5 hours out of my 2011 model and I expect to see 8+ out of this one when school gets going again.
    You're headed in the right direction; I think you'll be more than happy with a new MacBook Air if you max out the options, and you'll increase the resale value so in 2-3 years you can grab the latest and greatest as I did.
    Good luck in engineering and don't let it get to you, stick it out and it'll be worthwhile!
