Review etresoft data

I have a 2011 MacBook Pro that has, over time, become slower and clunkier.  I ran EtreCheck but need some help deciphering the results.  I've pasted them below. Any suggestions are appreciated.
EtreCheck version: 2.0.11 (98)
Report generated November 6, 2014 at 9:13:43 PM EST
Hardware Information: ℹ️
  MacBook Pro (13-inch, Late 2011) (Verified)
  MacBook Pro - model: MacBookPro8,1
  1 2.4 GHz Intel Core i5 CPU: 2-core
  4 GB RAM Upgradeable
  BANK 0/DIMM0
  2 GB DDR3 1333 MHz ok
  BANK 1/DIMM0
  2 GB DDR3 1333 MHz ok
  Bluetooth: Old - Handoff/Airdrop2 not supported
  Wireless:  en1: 802.11 a/b/g/n
Video Information: ℹ️
  Intel HD Graphics 3000 - VRAM: 384 MB
  Color LCD 1280 x 800
System Software: ℹ️
  OS X 10.9.5 (13F34) - Uptime: 16 days 0:49:7
Disk Information: ℹ️
  Hitachi HTS545050B9A302 disk0 : (500.11 GB)
  S.M.A.R.T. Status: Verified
  EFI (disk0s1) <not mounted> : 210 MB
  Macintosh HD (disk0s2) /  [Startup]: 499.25 GB (230.23 GB free)
  Recovery HD (disk0s3) <not mounted>  [Recovery]: 650 MB
  MATSHITADVD-R   UJ-8A8 disk1 : (255.8 MB)
  S.M.A.R.T. Status: Verified
  disk1s1 (disk1s1) <not mounted> : 26 MB
  disk1s2 (disk1s2) <not mounted> : 25 MB
  disk1s3 (disk1s3) <not mounted> : 29 MB
  disk1s4 (disk1s4) <not mounted> : 35 MB
  disk1s5 (disk1s5) <not mounted> : 25 MB
  disk1s6 (disk1s6) <not mounted> : 28 MB
  disk1s7 (disk1s7) <not mounted> : 29 MB
  disk1s8 (disk1s8) <not mounted> : 31 MB
  disk1s9 (disk1s9) <not mounted> : 27 MB
USB Information: ℹ️
  Apple Computer, Inc. IR Receiver
  Apple Inc. FaceTime HD Camera (Built-in)
  Apple Inc. BRCM2070 Hub
  Apple Inc. Bluetooth USB Host Controller
  Apple Inc. Apple Internal Keyboard / Trackpad
Thunderbolt Information: ℹ️
  Apple Inc. thunderbolt_bus
Gatekeeper: ℹ️
  Mac App Store and identified developers
Kernel Extensions: ℹ️
  /System/Library/Extensions
  [loaded] com.citrix.driver.net6im (1.1.3) Support
  [loaded] com.deterministicnetworks.driver.dne (1.0.9) Support
  [loaded] com.deterministicnetworks.driver.dniregistry (1.0.4) Support
Startup Items: ℹ️
  EQSharedEngine: Path: /Library/StartupItems/EQSharedEngine
  Startup items are obsolete and will not work in future versions of OS X
Launch Agents: ℹ️
  [loaded] com.equitrac.logincontroller.plist Support
  [loaded] com.oracle.java.Java-Updater.plist Support
Launch Daemons: ℹ️
  [loaded] com.adobe.fpsaud.plist Support
  [running] com.citrix.agadminservice.plist Support
  [running] com.deterministicnetworks.daemon.dniregsvr.plist Support
  [loaded] com.microsoft.office.licensing.helper.plist Support
  [loaded] com.oracle.java.Helper-Tool.plist Support
  [loaded] com.oracle.java.JavaUpdateHelper.plist Support
  [loaded] com.sonos.smbbump.plist Support
User Launch Agents: ℹ️
  [loaded] com.adobe.ARM.[...].plist Support
  [loaded] com.google.keystone.agent.plist Support
User Login Items: ℹ️
  iTunesHelper ApplicationHidden (/Applications/iTunes.app/Contents/MacOS/iTunesHelper.app)
  Dropbox Application (/Applications/Dropbox.app)
  AdobeResourceSynchronizer ApplicationHidden (/Applications/Adobe Reader.app/Contents/Support/AdobeResourceSynchronizer.app)
  Google Chrome ApplicationHidden (/Applications/Google Chrome.app)
  HP Product Research Application (/Library/Application Support/Hewlett-Packard/Customer Participation/HP Product Research.app)
Internet Plug-ins: ℹ️
  JavaAppletPlugin: Version: Java 7 Update 71 Check version
  Default Browser: Version: 537 - SDK 10.9
  AdobePDFViewerNPAPI: Version: 10.1.12 Support
  FlashPlayer-10.6: Version: 15.0.0.189 - SDK 10.6 Support
  Silverlight: Version: 5.1.30514.0 - SDK 10.6 Support
  Flash Player: Version: 15.0.0.189 - SDK 10.6 Support
  AGNetscapePlugin: Version: 1.0 Support
  CitrixICAClientPlugIn: Version: 11.4.3 - SDK 10.0 Support
  QuickTime Plugin: Version: 7.7.3
  AdobePDFViewer: Version: 10.1.12 Support
  SharePointBrowserPlugin: Version: 14.4.5 - SDK 10.6 Support
  MeetingJoinPlugin: Version: (null) - SDK 10.6 Support
3rd Party Preference Panes: ℹ️
  Flash Player  Support
  Java  Support
Time Machine: ℹ️
  Skip System Files: NO
  Mobile backups: OFF
  Auto backup: YES
  Destinations:
  Data [Network]
  Total size: 2.00 TB
  Total number of backups: 34
  Oldest backup: 2014-02-01 23:33:23 +0000
  Last backup: 2014-11-07 01:54:14 +0000
  Size of backup disk: Excellent
  Backup size 2.00 TB > (Disk size 0 B X 3)
Top Processes by CPU: ℹ️
    100% AGAdminService
      11% WindowServer
      10% Safari
      2% hidd
      1% com.apple.WebKit.Networking
Top Processes by Memory: ℹ️
  164 MB com.apple.WebKit.WebContent
  129 MB Safari
  86 MB com.apple.WebKit.Plugin.64
  52 MB softwareupdated
  52 MB com.apple.WebKit.Networking
Virtual Memory Information: ℹ️
  68 MB Free RAM
  1.24 GB Active RAM
  1.18 GB Inactive RAM
  1.10 GB Wired RAM
  3.75 GB Page-ins
  399 MB Page-outs

This could have something to do with it being less responsive than before:
Top Processes by CPU: ℹ️
    100% AGAdminService
I am not sure how long it has been running, but your page-outs number could indicate that you do not have enough RAM for everything you are doing at the same time (it should be a very small number; yours is 399 MB).
Your MacBook Pro 8,1 is officially rated for 8 GB, but it can take up to 16 GB with exactly the right modules if the firmware is up to date.
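
If you want to keep an eye on the paging after a restart without re-running EtreCheck, the numbers in its Virtual Memory section come from the output of macOS's vm_stat. Below is a minimal sketch (not part of the report or the reply above); it assumes the stock vm_stat command and a 4096-byte page size, and simply reports the free-RAM and page-out counters in megabytes.

  # Minimal sketch: read the counters behind EtreCheck's Virtual Memory section
  # straight from vm_stat. Assumptions: the stock macOS vm_stat command is on
  # the PATH and prints lines like "Pageouts:   12345."; page size is 4096 bytes.
  import re
  import subprocess

  PAGE_SIZE = 4096  # bytes per page; vm_stat prints the real value on its first line

  def to_mb(pages):
      return pages * PAGE_SIZE / 2**20

  out = subprocess.run(["vm_stat"], capture_output=True, text=True, check=True).stdout
  stats = {}
  for line in out.splitlines():
      m = re.match(r'\s*"?([A-Za-z -]+)"?:\s+(\d+)\.', line)
      if m:
          stats[m.group(1).strip()] = int(m.group(2))

  print(f"Free RAM:  {to_mb(stats.get('Pages free', 0)):.0f} MB")
  print(f"Page-outs: {to_mb(stats.get('Pageouts', 0)):.0f} MB")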

Similar Messages

  • Shared Review Deadline Dates do not Update - Acrobat Pro 9

    I recently started two shared reviews in Acrobat Pro 9.0.0. When I attempt to extend the deadline dates on both, it updates me in the tracker, but when anyone else in the review attempts to add more comments they are told the review has expired.
    I am the initiator of the reviews, and according to Adobe I should be able to change the deadline date in the tracker and have it update the other reviewers so they can continue with their reviews. It does extend my deadline, but not anyone else's.
    Does this function not work? Should I set all shared reviews to "No Deadline"?
    I am running Windows XP Pro, with 3 GB of memory, on an IBM ThinkCentre.
    The file is on a network shared folder where everyone who is reviewing has read/write rights.

    Here is my system report.
    Available Physical Memory: 2097151 KB
    Available Virtual Memory: 1993900 KB
    BIOS Version: PTLTD - 60400d0
    Default Browser: C:\Program Files\Internet Explorer\iexplore.exe
    Version: 6.00.2900.2180 (xpsp_sp2_rtm.040803-2158)
    Creation Date: 2005/05/27
    Creation Time: 12:45:47 PM
    Default Mail: Lotus Notes
    C:\Program Files\lotus\notes\nmailman.dll
    Version: 5.0.0
    Creation Date: 2006/09/27
    Creation Time: 5:30:08 AM
    Graphics Card: NVIDIA Quadro NVS with AGP8X
    Version: 6.14.10.8167
    Check: Not Supported
    Installed Acrobat: C:\Program Files\Adobe\Acrobat 9.0\Acrobat\Acrobat.exe
    Version: 9.0.0.2008061200
    Creation Date: 2008/06/12
    Creation Time: 1:25:18 AM
    Installed Acrobat: C:\Program Files\Adobe\Reader 9.0\Reader\AcroRd32.exe
    Version: 9.0.0.2008061200
    Creation Date: 2008/06/12
    Creation Time: 1:47:22 AM
    Locale: English (United States)
    Monitor:
    Name: NVIDIA Quadro NVS with AGP8X
    Resolution: 1024 x 768 x 60
    Bits per pixel: 32
    Monitor:
    Name: NVIDIA Quadro NVS with AGP8X
    Resolution: 1680 x 1050 x 60
    Bits per pixel: 32
    OS Manufacturer: Microsoft Corporation
    OS Name: Microsoft Windows XP Professional
    OS Version: 5.1.2600 Service Pack 2
    Page File Space: 4194303 KB
    Processor: x86 Family 15 Model 2 Stepping 9 GenuineIntel ~2793 Mhz
    System Name: DAL-W-WINDLR
    Temporary Directory: C:\DOCUME~1\windlr\LOCALS~1\Temp\
    Time Zone: Central Standard Time
    Total Physical Memory: 2097151 KB
    Total Virtual Memory: 2097024 KB
    User Name: windlr
    Windows Directory: C:\WINDOWS
    Installed plug-ins:
    C:\Program Files\Adobe\Acrobat 9.0\Acrobat\plug_ins\Annots.api
    Version: 9.0.0.2008061200
    Creation Date: 2008/06/12
    Creation Time: 12:59:50 AM
    C:\Program Files\Adobe\Acrobat 9.0\Acrobat\plug_ins\EScript.api
    Version: 9.0.0.2008061200
    Creation Date: 2008/06/12
    Creation Time: 1:05:18 AM
    C:\Program Files\Adobe\Acrobat 9.0\Acrobat\plug_ins\Updater.api
    Version: 9.0.0.2008061200
    Creation Date: 2008/06/12
    Creation Time: 1:00:30 AM

  • Technical Design Review Question: Data target Mapping and transformation

    I got my hands on technical design documentation for a project on COPA budget. I came up with a few questions but I will post them separately for fast closing and awards:
    In the discussions of data target mapping and transformation, there was a table of characteristics showing dimensions, BW field, source field, data type, etc.
    1. What is the technique in deciding which characteristics get grouped together into a particular dimension?
    2. Why do some dimensions only have one characteristic and what is its significance?
    3. I saw one BW field, OVAL_TYPE (description = valuation type), included in three dimensions: Customer, Material and Valuation Type (only field = OVAL_TYPE). What is the significance of this repetition?
    Thanks

    Morning,
    Defining a dimension means grouping together all characteristics that have a "1 to n" relationship, not "n to m", in order to reduce data volume (cardinality). If you reduce cardinality (that is, define a 1:n relationship wherever possible and group those characteristics within one common dimension), you increase performance because of the smaller data volume. It is essential to understand that this cannot be changed easily after it is defined in PROD, so data modeling is very important from this point of view as well.
    Example:
    An accounting object has an "m:n" relationship to the accounting partner object. That is the reason why the accounting object and the accounting partner object do not belong to the same dimension.
    Ni hao
    Eckhard Lewin

  • Can a reviewer edit the budget data

    When an owner prepares a budget and promotes the planning unit to the next person in the hierarchy, can this reviewer make changes to the promoted budget data? We made two users with the planner role and tested it by giving ownership to the reviewer after the budget was promoted to him, but even then the forms were not available for editing.

    Reviewers will not be able to edit data; they can only review it.
    HTH -
    Jasmine.

  • Mysterious Data Usage

    Hello,
    My wife owns an HTC Thunderbolt.  We are experiencing mysterious data usage. 
    The data log shows data usage at hours and times when the phone is not being used.  We have tried closing all apps and resetting the phone.
    Still, these nickel-and-dime data usage entries are adding up fast and bringing us to the brink of our plan, which we have already extended.
    We do not stream video, we do not listen to Pandora.... The phone is used strictly for texts, emails, and phone calls.
    Any advice?  Thank you for your help.

    Same here - very similar. The times are close but not like yours.
    I've deleted the email accounts and re-added them to see if there was a looping email send issue that I've read about.
    Funny though, when I review the data use on the phone, "photo" is the app shown as using the most data. So I've turned off and uninstalled everything I can think of that is tied to that.

  • Can I use Acrobat 11 to download a PDF that only allows me to read, enter data, and print?

    I need to complete gov. forms available online as pdf.  I can review, enter data and print, but I cannot save to my computer or anywhere else.  I need to be able to save so I can return to the form and enter information as I access it.  Can this be done if I purchase Acrobat 11 and, if so, can I get the Standard edition?  I currently only have Reader.

    Yes, if you upgrade to at least Acrobat Standard you will be able to save. Since you can't save with Reader (assuming 11), the forms are probably XFA forms (created with LiveCycle Designer), which must be Reader-enabled in order to be saved with Reader. Reader 11 can save non-enabled AcroForms (forms created with Acrobat), but not non-enabled XFA forms. If you have a Reader version prior to 11, try using Reader 11 to see if it will save.
    If you can provide a link to one of the forms, I can tell you for sure what the minimum requirement for saving is.

  • Mobile Data was racking up enormous over-charges and nobody at Verizon can tell me why.

    I was using a "Jet Pack" mobile hotspot so I could access the internet on my laptop out and about, and mainly to monitor traffic during my commute. My Jet Pack contract ended and coincidentally my cell phone was eligible for the next upgrade. So I upgraded to the Samsung Galaxy 3 with the intention of using the "Mobile Hotspot" feature on the new phone. I assumed that this feature would be identical to the Jet Pack as far as my data usage. Note that I occasionally upload photos and videos from the Jet Pack and I am familiar with how much data is used when I upload a video. If I upload too many videos it will cause my data to exceed my plan allowances. I think I may have exceeded my data allowances 1 or 2 times during my contract period when I was using the Jet Pack. The times that I did exceed my data allowance were minimal over-charges, less than $50.
    So off goes the Jet Pack and on comes the Samsung Galaxy 3, using the "Mobile Hotspot" feature on the phone instead of the Jet Pack. During my first billing cycle with my new phone, which I barely knew how to use, I began to use the mobile hotspot in the same routine as my Jet Pack. I did upload a few videos knowing that I might exceed my plan a "little bit"....maybe by 1 or 2 GB over. I received notification that I had exceeded my allowance and I immediately called to see why. I was shocked to see that I had used up a full 10GB, which is WAY more than I had ever used with my Jet Pack. So I thought to myself that maybe the videos I uploaded were too long and I just didn't realize it. So I asked for an additional 2GB of data to get me through the billing cycle. I was told that it would only cost $20 more. I immediately decreased my data usage down to just using the Mobile Hotspot feature only.....no video uploading for the remainder of the billing cycle. I was sure that I would be fine after adding the additional 2GB.
    Then after a couple weeks I receive my bill and I am being charged over 19GB !!!!  And my overcharges are $150. I was shocked. I immediately called and spent a long time on the phone with several different Verizon reps. I spoke to at least 2 managers as well. I was able to beg for about a $45 credit. They all were very sympathetic but would not budge on refunding me any more. I reviewed my usage on My Verizon and I am baffled how I am being charged all this data!!!  Nobody can give me any details on what I may have done to use up that much data. They can't even tell me what the data was used for. What was I doing or not doing that used up all this data?  I was reviewing my data usage and I was being charged data 24/7!!! I saw data being used at 3AM when I am asleep!!! HOW CAN THIS BE??? I am EXTREMELY unhappy with Verizon for allowing this to happen and FAILING to give me an accurate and detailed explanation of how this may have happened and what exactly was drawing all the data. I will not pay these over-charges and I am going to find out how this happened, even if it takes me 1000 hours of research. This is maddening.

    And yet, if they did keep track of what their customers were looking at/doing online, TONS of people would be up in arms about it, just like the NSA stuff a couple months ago.
    You can't have it both ways. Either Verizon doesn't keep a record of what websites you visit, what videos you watch, etc, or they do keep a record and the government can search it. It's not about whether you have "something to hide", but rather about whether someone with an agenda THINKS you have something to hide.
    You are an adult, presumably; install software on your computer to monitor your traffic if you care about this information. Then you personally have the records.

  • How Do I View iTunes Library File's Data Intelligibly?

    Hello Everyone,
    I am trying to set up a library on a new PC. I have most of my audio files in iTunes now, and I have several old iTunes metadata folders containing itl and xml files. What I want to do is import the playlists and data into my new iTunes. When I do this I get a message telling me some songs are missing. I want to work out WHICH songs iTunes thinks are missing so I can correct these playlists.
    It occurred to me that I could do this by manually reviewing the data in either of the old library's itl or xml files. I can't work out how to do this! I know nothing about XML and it just looks like code to me when opened in a web browser. I want to open them up and clearly view what they say is in the playlists.
    Can anybody help me with this?
    Thanks.

    Examining the XML file will not tell you what is missing. All that file will show is what tracks iTunes has in the playlist, and you can get that same listing in a format that's easier to read by using the Export Playlist command under the File -> Library menu.
    If you really want to work with the XML file, it's just a text file with special formatting. Any text editor capable of reading XML will display the file in a format that's somewhat more readable. But I know of nothing that would read the Track ID in the playlist section and automatically relate and fill in the track information. You'll have to do that yourself; the track ID and the applicable data (track name, artist, etc.) will be "higher" in the document and should be locatable with a normal search function.
    Regards.
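
    If you do want to automate the cross-referencing described above instead of searching by hand, the library XML is a standard property list, so a short script can resolve each playlist's Track IDs and flag entries whose files are gone. This is only a rough sketch, not an Apple tool: the file name "iTunes Music Library.xml" and the assumption that "missing" songs are tracks whose Location no longer exists on disk are both mine, so adjust them as needed.

    # Rough sketch: resolve each playlist's Track IDs against the Tracks section
    # of an iTunes library XML file and flag tracks whose files are missing.
    # Assumes the usual "iTunes Music Library.xml" plist layout; adjust the path.
    import plistlib
    from pathlib import Path
    from urllib.parse import unquote, urlparse

    with open("iTunes Music Library.xml", "rb") as f:
        library = plistlib.load(f)

    tracks = library.get("Tracks", {})            # keyed by Track ID as a string
    for playlist in library.get("Playlists", []):
        name = playlist.get("Name", "<unnamed playlist>")
        for item in playlist.get("Playlist Items", []):
            track = tracks.get(str(item["Track ID"]))
            if track is None:
                print(f"{name}: Track ID {item['Track ID']} is not in the library")
                continue
            location = track.get("Location")      # a percent-encoded file:// URL
            if not location:
                print(f"{name}: '{track.get('Name')}' has no file location")
                continue
            path = unquote(urlparse(location).path)
            if len(path) > 2 and path[2] == ":":  # Windows drive letter: drop the leading '/'
                path = path[1:]
            if not Path(path).exists():
                print(f"{name}: '{track.get('Name')}' points at a missing file")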

  • Is there a limit to the number of columns in a spreadsheet to be used for Data Merge?

    I need 60 fields for each record - my test data of 6 records adds random breaks when exported as a txt file, causing it not to merge correctly and creating many extra records with varying fields (about 214 records) when merged.
    I read somewhere it was 20, but this seems rather limiting! I did try reducing my document to 20-column records, but I am still having issues with the txt file breaking the records up into multiple record lines. I have removed the line breaks from the spreadsheet.

    I've done just over 40 before. If you are getting line breaks and/or bad importing, it is usually due to something amiss in a cell. The usual culprit is a line break in a cell. Missing headers or illegal characters in a header can cause a breakdown at import as well.
    I would recommend opening the csv in a good text editor to review the data for line breaks first thing. Oh, and I always use tab delimited as well. A misplaced or missing quote mark (if used in the text of a cell) can also cause an import failure.
    Each line in a good Excel export when opened should begin with the data in your first column. If not, that indicates a line break in some cell.
    Mike
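
    One way to do that review without eyeballing every line: since every record should have the same number of fields as the header row, a short script can flag the rows that a stray break has split. This is just a sketch, not part of InDesign's Data Merge; the tab delimiter and the file name merge_data.txt are assumptions, so change them to match your export.

    # Quick sketch: flag rows in a tab-delimited Data Merge export whose field
    # count doesn't match the header row -- the usual symptom of a line break
    # hiding inside a cell. File name and delimiter are assumptions; adjust them.
    import csv

    with open("merge_data.txt", newline="", encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t")
        header = next(reader)
        expected = len(header)                    # e.g. 60 fields per record
        for line_no, row in enumerate(reader, start=2):
            if len(row) != expected:
                print(f"Line {line_no}: {len(row)} fields instead of {expected}")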

  • Data Protector 6.11 File Names Purge - Recovery.

    Hi All;
    On reviewing our Data Protector today I am finding a lot of session, restore, and media files missing.
    I have found out that someone ran a command for DP:
    Omnidbutil -purge -filenames
    Could this be a reason I am seeing missing information as above?
    The data and catalogue protection is 2 weeks. I cannot see any trace of the sessions/backup files I am looking for. On attempting to restore from the target, the expected drives aren't even listed, and 75% of the resultant backup media is listed with a blue question mark in 'Devices and Media', i.e. not found.
    Can anyone help/work with me on this please?
    What effect does this have on the IDB/CDB/DCBF?
    Thanks in advance for any assistance guys and gals.

    The main reason for reformatting the cache, was the merger of the cache implementations in Java Web Start and Java Plug-in. There is now one cache, and one implementation of downloading and caching code for the two products.
    The format and implementation are more similar to the previous plugin format, as that was faster and contained more features (cache size limit, the ability to run without caching altogether, and respect for more cache-control and HTTP header directives than the webstart implementation).
    The requirement for an undeterminable filename was from the security group. A bug in older versions of IE allowed you to execute a file on the local filesystem that you knew of. ( a jar file is a form of an executable. )
    We feel that the cache is only that, a private cache for the java products.
    By making this change we recognize that code such as this that assumes it knows the cache's format and contents can no longer work unmodified, and that this will prevent code from being developed that does this in the future.
    There are several reasons why code has done this (including some in the JDIC project), but we think there are better mechanisms for achieving the objectives of such code, and are ready to help with what we understand may be a difficult transition for some.
    /Andy

  • Trying to pass xml data to a web service

    I'm working on an Apex application that is required to pass data to a web service for loading into another (non-Oracle) system. The web service expects two strings as input: the first string is simply an ID, and the second string is an xml document. I generate an xml 'string' using PL/SQL in an on-submit process prior to invoking the web service.
    If I pass regular text for the second parameter, the web service returns an error message which is stored in the response collection and displayed on the page as expected. When I pass the xml data, I get a no data found error and the response collection is empty. I have tried this in our development environment on Apex 3.1.2 (database version 10.2). I also tried this on our Apex 4.0.2 sandbox (with the same Oracle 10.2 database). I have found that once I have nested xml, I get the no data found message; if I pass partial xml data, I get the error response from the web service.
    Perhaps I am not generating the xml correctly to pass to the web service (this only just occurred to me as I write this)? Or is there an issue passing xml data from Apex to the web service? Any help will be greatly appreciated! Here is the code I use to generate the xml string:
    declare
      cursor build_data  is
        select u_catt_request_buid,u_catt_request_name,u_catt_cassette_buid,u_catt_cassette_name
          ,u_project_name,u_sub_project,replace(u_nominator,'ERROR ','') u_nominator
          ,replace(replace(u_going_to_vqc,'Yes','true'),'No','false') u_going_to_vqc
          ,u_promoter,u_cds,u_terminator
          ,u_primary_trait,u_source_mat_prvd,u_pro_resistance_1,u_vector_type
          ,nvl(u_my_priority,'Medium') u_my_priority
          ,replace(replace(u_immediate_trafo,'Yes','true'),'No','false') u_immediate_trafo
          ,replace(replace(u_new_bps_cmpnt,'Yes','true'),'No','false') u_new_bps_cmpnt
          ,u_compnt_name,u_new_cmpt_desc,initcap(u_target_crop) u_target_crop,u_corn_line
          ,u_plant_selection,u_num_of_ind_events,u_num_plants_per_event,u_molecular_quality_events
          ,replace(replace(u_field,'Yes','true'),'No','false') u_field
          ,u_t1_seed_request,u_potential_phenotype,u_submission_date
          ,u_sequence_length,u_trait,u_frst_parent,u_frst_parent_vshare_id,u_cds_vshare_id
          ,constructid,cassetteid,description
        from temp_constructs_lims
        order by constructid,description;
      v_xml_info         varchar2(350);
      v_xml_header       varchar2(1000);
      v_xml_data         clob;
      v_xml_footer       varchar2(50);
      v_create_date      varchar2(10);
      v_scientist_name   v_users.full_name%type;
      v_scientist_email  v_users.email_address%type;
      v_primas_code      construct.fkprimas%type;
      v_nominator_name   v_nominators.full_name%type;
      v_file_length      number;
    begin
      -- initialize variables
      v_create_date := to_char(sysdate,'YYYY-MM-DD');
      v_xml_data := null;
      -- get name and email address
      begin
        select full_name,email_address
        into v_scientist_name,v_scientist_email
        from v_users
        where ldap_account = :F140_USER_ID; 
      exception when no_data_found then
        v_scientist_name := '';
        v_scientist_email := '';
        v_scientist_name := 'Test, Christine';
        v_scientist_email := '[email protected]';
      end;
      -- set up xml file 
      if :OWNER like '%DEV%' then
        v_xml_info := '
          <?xml version="1.0" encoding="utf-8"?>
          <exchange xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                    xmlns="deService"
                    xsi:schemaLocation="deService http://mycompany.com/webservices/apexdataexchange/schemas/RTPCATT.xsd">';
      else
        v_xml_info := '
          <?xml version="1.0" encoding="utf-8"?>
          <exchange xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                    xmlns="deService"
                    xsi:schemaLocation="deService http://mycompanyprod.com/webservices/apexdataexchange/schemas/RTPCATT.xsd">';
      end if;
      -- populate xml header records
      v_xml_header := '<header xmlns="">
        <stdXmlVer>2.0</stdXmlVer>
        <sendingUnit>'||:P36_UNIT_NUMBER||'</sendingUnit>
        <sendingPerson>'||v_scientist_name||'</sendingPerson>
        <notification>'||v_scientist_email||'</notification>
        <creationDate>'||v_create_date||'</creationDate>
      </header>
      <entities xmlns="">';
      for rec in build_data loop
        begin
          -- get primas code for current construct
          select fkprimas
          into v_primas_code
          from construct
          where constructid = rec.constructid;
        exception when no_data_found then
          v_primas_code := null;
        end;
        begin
          -- get nominator name for current construct
          select full_name
          into v_nominator_name
          from v_nominators
          where nominator_id = rec.u_nominator;
        exception
          when no_data_found then
            v_nominator_name := null;
          when invalid_number then
            v_nominator_name := catt_pkg.full_name_from_user_id(p_user_id => rec.u_nominator);
            v_nominator_name := 'Test, Christine';
        end;
        v_xml_data := v_xml_data||'
          <Construct>
          <requestBUID>'||rec.u_catt_request_buid||'</requestBUID>
          <requestName>'||rec.u_catt_request_name||'</requestName>
          <cassetteBUID>'||rec.u_catt_cassette_buid||'</cassetteBUID>
          <cassetteName>'||rec.u_catt_cassette_name||'</cassetteName>
          <scientist>'||v_scientist_name||'</scientist>
          <projectNumber>'||v_primas_code||'</projectNumber>
          <subProject>'||rec.u_sub_project||'</subProject>
          <comments>'||rec.description||'</comments>
          <nominator>'||v_nominator_name||'</nominator>
          <goingToVqc>'||rec.u_going_to_vqc||'</goingToVqc>
          <primaryTrait>'||rec.u_primary_trait||'</primaryTrait>
          <sourceMatPrvd>'||rec.u_source_mat_prvd||'</sourceMatPrvd>
          <prokaryoticResistance>'||rec.u_pro_resistance_1||'</prokaryoticResistance>
          <vectorType>'||rec.u_vector_type||'</vectorType>
          <priority>'||rec.u_my_priority||'</priority>
          <immediateTrafo>'||rec.u_immediate_trafo||'</immediateTrafo>
          <newComponent>'||rec.u_new_bps_cmpnt||'</newComponent>
          <componentName>'||rec.u_compnt_name||'</componentName>
          <newComponentDescription>'||rec.u_new_cmpt_desc||'</newComponentDescription>
          <targetCrop>'||rec.u_target_crop||'</targetCrop>
          <Line>'||rec.u_corn_line||'</Line>
          <plantSelection>'||rec.u_plant_selection||'</plantSelection>
          <numOfIndEvents>'||rec.u_num_of_ind_events||'</numOfIndEvents>
          <numOfPlantsPerEvent>'||rec.u_num_plants_per_event||'</numOfPlantsPerEvent>
          <molecularQualityEvents>'||rec.u_molecular_quality_events||'</molecularQualityEvents>
          <toField>'||rec.u_field||'</toField>
          <potentialPhenotype>'||rec.u_potential_phenotype||'</potentialPhenotype>
          </Construct>';
      end loop;
      -- complete xml data   
      v_xml_footer := '
          </entities>
        </exchange>';
      -- complete submission data
      :P36_XML_SUBMISSION := null;   
      :P36_XML_SUBMISSION := v_xml_info||v_xml_header||v_xml_data||v_xml_footer;
      :P36_XML_SUBMISSION := trim(:P36_XML_SUBMISSION);
    end;
    Here is an example of :P36_XML_SUBMISSION:
    <?xml version="1.0" encoding="utf-8"?> <exchange xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="deService" xsi:schemaLocation="deService http://mycompany.com/webservices/apexdataexchange/schemas/RTPCATT.xsd"> <header xmlns=""> <stdXmlVer>2.0</stdXmlVer> <sendingUnit>10</sendingUnit> <sendingPerson>Test, Christine</sendingPerson> <notification>[email protected]</notification> <creationDate>2011-12-20</creationDate> </header> <entities xmlns=""> <Construct> <requestBUID>150000123</requestBUID> <requestName>AA0000123</requestName> <cassetteBUID>160000123</cassetteBUID> <cassetteName>AB000123</cassetteName> <scientist>Test, Christine</scientist> <projectNumber>T000123</projectNumber> <subProject>Discovery Plus</subProject> <comments>AA0000123 From CATT on 20-DEC-11 </comments> <nominator>Test, Christine</nominator> <goingToVqc>true</goingToVqc> <primaryTrait>promoter::intron::transit:gene::terminator</primaryTrait> <sourceMatPrvd>seed - stuff</sourceMatPrvd> <prokaryoticResistance></prokaryoticResistance> <vectorType>Plant Vector</vectorType> <priority>Medium</priority> <immediateTrafo>true</immediateTrafo> <newComponent></newComponent> <componentName>gene</componentName> <newComponentDescription>unknown function; sequence has some similarity to others</newComponentDescription> <targetCrop>Crop</targetCrop> <Line>Inbred</Line> <plantSelection>smidge ver2</plantSelection> <numOfIndEvents>6</numOfIndEvents> <numOfPlantsPerEvent>1</numOfPlantsPerEvent> <molecularQualityEvents>NA</molecularQualityEvents> <toField>true</toField> <potentialPhenotype></potentialPhenotype> </Construct> </entities> </exchange>My application page is accessed by an action from another page. The user reviews the data in a sql report region. When the use clicks on the Upload (SUBMIT) button, the xml string is generated first and then the web service is invoked. I have tried passing a simple string as the second parameter ("dummydata") and partial data in the xml string ("<sendingPerson>Test, Christine</sendingPerson>") the web service returns this error in both cases:
    Error[Validate Data]: (XML) = Data at the root level is invalid. Line 1, position 1.. Cannot validate the XML! Data Exchange not accepted!
    Once I pass the entire xml string above, I get an ORA-01403: no data found error. I have opened the web service in IE and pasted my xml input string and received a valid, verified result, so I am sure that the generated xml is correct. I have spoken with the web service developer; there are no log entries created by the web service when I submit the full xml string, so I suspect the failure is in the Apex application.
    Thanks,
    Christine
    I should add that once I have nested tags in the xml, I get the Oracle no data found error ("<header xmlns=""> <stdXmlVer>2.0</stdXmlVer> <sendingUnit>10</sendingUnit> </header>"). If I do not have nested tags in the xml ("<notification>[email protected]</notification> <creationDate>2011-12-20</creationDate>"), I get the web service response (error).
    Edited by: ChristineD on Dec 20, 2011 9:54 AM

    Ok, I think I'm getting closer to thinking this all the way through. When I have used clobs in the past, I've always used the DBMS_LOB package. I use this to create a temp clob and then make the above calls. I had to go find an example in my own code to remember all of this. So, here is another suggestion... feel free to disregard all the previous code snippets.
    declare
      cursor build_data  is
        select u_catt_request_buid,u_catt_request_name,u_catt_cassette_buid,u_catt_cassette_name
          ,u_project_name,u_sub_project,replace(u_nominator,'ERROR ','') u_nominator
          ,replace(replace(u_going_to_vqc,'Yes','true'),'No','false') u_going_to_vqc
          ,u_promoter,u_cds,u_terminator
          ,u_primary_trait,u_source_mat_prvd,u_pro_resistance_1,u_vector_type
          ,nvl(u_my_priority,'Medium') u_my_priority
          ,replace(replace(u_immediate_trafo,'Yes','true'),'No','false') u_immediate_trafo
          ,replace(replace(u_new_bps_cmpnt,'Yes','true'),'No','false') u_new_bps_cmpnt
          ,u_compnt_name,u_new_cmpt_desc,initcap(u_target_crop) u_target_crop,u_corn_line
          ,u_plant_selection,u_num_of_ind_events,u_num_plants_per_event,u_molecular_quality_events
          ,replace(replace(u_field,'Yes','true'),'No','false') u_field
          ,u_t1_seed_request,u_potential_phenotype,u_submission_date
          ,u_sequence_length,u_trait,u_frst_parent,u_frst_parent_vshare_id,u_cds_vshare_id
          ,constructid,cassetteid,description
        from temp_constructs_lims
        order by constructid,description;
      v_xml_info         varchar2(350);
      v_xml_header       varchar2(1000);
      v_xml_data         clob;
      v_xml_footer       varchar2(50);
      v_create_date      varchar2(10);
      v_scientist_name   v_users.full_name%type;
      v_scientist_email  v_users.email_address%type;
      v_primas_code      construct.fkprimas%type;
      v_nominator_name   v_nominators.full_name%type;
      v_file_length      number;
      v_xml_body    varchar2(32767); --added by AustinJ
      v_page_item    varchar2(32767);  --added by AustinJ
    begin
      -- initialize variables
      v_create_date := to_char(sysdate,'YYYY-MM-DD');
      --v_xml_data := null;   --commented out by AustinJ
      dbms_lob.createtemporary( v_xml_data, FALSE, dbms_lob.session );  --added by AustinJ
      dbms_lob.open( v_xml_data, dbms_lob.lob_readwrite );  --added by AustinJ
      -- get name and email address
      begin
        select full_name,email_address
        into v_scientist_name,v_scientist_email
        from v_users
        where ldap_account = :F140_USER_ID; 
      exception when no_data_found then
        v_scientist_name := '';
        v_scientist_email := '';
        v_scientist_name := 'Test, Christine';
        v_scientist_email := '[email protected]';
      end;
      -- set up xml file 
      if :OWNER like '%DEV%' then
        v_xml_info := '
          <?xml version="1.0" encoding="utf-8"?>
          <exchange xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                    xmlns="deService"
                    xsi:schemaLocation="deService http://mycompany.com/webservices/apexdataexchange/schemas/RTPCATT.xsd">';
      else
        v_xml_info := '
          <?xml version="1.0" encoding="utf-8"?>
          <exchange xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                    xmlns="deService"
                    xsi:schemaLocation="deService http://mycompanyprod.com/webservices/apexdataexchange/schemas/RTPCATT.xsd">';
      end if;
      -- populate xml header records
      v_xml_header := '<header xmlns="">
        <stdXmlVer>2.0</stdXmlVer>
        <sendingUnit>'||:P36_UNIT_NUMBER||'</sendingUnit>
        <sendingPerson>'||v_scientist_name||'</sendingPerson>
        <notification>'||v_scientist_email||'</notification>
        <creationDate>'||v_create_date||'</creationDate>
      </header>
      <entities xmlns="">';
      for rec in build_data loop
        begin
          -- get primas code for current construct
          select fkprimas
          into v_primas_code
          from construct
          where constructid = rec.constructid;
        exception when no_data_found then
          v_primas_code := null;
        end;
        begin
          -- get nominator name for current construct
          select full_name
          into v_nominator_name
          from v_nominators
          where nominator_id = rec.u_nominator;
        exception
          when no_data_found then
            v_nominator_name := null;
          when invalid_number then
            v_nominator_name := catt_pkg.full_name_from_user_id(p_user_id => rec.u_nominator);
            v_nominator_name := 'Test, Christine';
        end;
        v_xml_body := '
          <Construct>
          <requestBUID>'||rec.u_catt_request_buid||'</requestBUID>
          <requestName>'||rec.u_catt_request_name||'</requestName>
          <cassetteBUID>'||rec.u_catt_cassette_buid||'</cassetteBUID>
          <cassetteName>'||rec.u_catt_cassette_name||'</cassetteName>
          <scientist>'||v_scientist_name||'</scientist>
          <projectNumber>'||v_primas_code||'</projectNumber>
          <subProject>'||rec.u_sub_project||'</subProject>
          <comments>'||rec.description||'</comments>
          <nominator>'||v_nominator_name||'</nominator>
          <goingToVqc>'||rec.u_going_to_vqc||'</goingToVqc>
          <primaryTrait>'||rec.u_primary_trait||'</primaryTrait>
          <sourceMatPrvd>'||rec.u_source_mat_prvd||'</sourceMatPrvd>
          <prokaryoticResistance>'||rec.u_pro_resistance_1||'</prokaryoticResistance>
          <vectorType>'||rec.u_vector_type||'</vectorType>
          <priority>'||rec.u_my_priority||'</priority>
          <immediateTrafo>'||rec.u_immediate_trafo||'</immediateTrafo>
          <newComponent>'||rec.u_new_bps_cmpnt||'</newComponent>
          <componentName>'||rec.u_compnt_name||'</componentName>
          <newComponentDescription>'||rec.u_new_cmpt_desc||'</newComponentDescription>
          <targetCrop>'||rec.u_target_crop||'</targetCrop>
          <Line>'||rec.u_corn_line||'</Line>
          <plantSelection>'||rec.u_plant_selection||'</plantSelection>
          <numOfIndEvents>'||rec.u_num_of_ind_events||'</numOfIndEvents>
          <numOfPlantsPerEvent>'||rec.u_num_plants_per_event||'</numOfPlantsPerEvent>
          <molecularQualityEvents>'||rec.u_molecular_quality_events||'</molecularQualityEvents>
          <toField>'||rec.u_field||'</toField>
          <potentialPhenotype>'||rec.u_potential_phenotype||'</potentialPhenotype>
          </Construct>
        ';    --modified by AustinJ
        dbms_lob.writeappend( v_xml_data, length(v_xml_body), v_xml_body);   --added by AustinJ
      end loop;
      -- complete xml data   
      v_xml_footer := '
          </entities>
        </exchange>';
      -- complete submission data
      v_page_item := null;   
      v_page_item := v_xml_info||v_xml_header||wwv_flow.do_substitutions(wwv_flow_utilities.clob_to_varchar2(v_xml_data))||v_xml_footer;   --added by AustinJ
      :P36_XML_SUBMISSION := trim(v_page_item);   --added by AustinJ
        dbms_lob.close( v_xml_data);  --added by AustinJ
        if v_xml_data is not null then   
            dbms_lob.freetemporary(v_xml_data);   --added by AustinJ
        end if;  --added by AustinJ
    end;
    This code will use the database to construct your clob and then convert it back to a varchar2 for output to your web service. This makes more sense to me now and hopefully you can follow what the process is doing.
    You don't technically need the two varchar2(32767) variables. I used two for naming-convention clarity's sake. You could use just one multipurpose variable instead.
    If you have any questions, just ask. I'll help if I can.
    Austin
    Edited by: AustinJ on Dec 20, 2011 12:17 PM
    Fixed spelling mistakes.

  • How to insert data from file to table??

    I need to know how I can insert data into multiple columns from a file. I can simply insert data into a one-column table, but I couldn't find a way to put data into all the columns.
    My data is stored in a file
    ************************************************text.txt***************
    133, shailendra, nagina, 14/H, 45637, 9156729863
    **************************************************************my_data(table)**********
    trying to insert into below table...
    id, name, last_name, add, pin, mob
    Let me know if anything else is needed. :)

    Hi Shailendra,
    Actually, in SQL Developer, you can open a connection to the target schema, right-click on the Tables node in the navigator tree view, select Import Data, then use the Data Import Wizard. It is extremely flexible. It looks like you have a comma-separated values file, so if you select Format: csv and Import Method: insert it will probably work just fine.
    To minimize the chance of errors during import, pick a preview limit value so the wizard can examine the size and data type of all columns in as many data rows as possible, then review the data type/size for each column in the next wizard page and override as necessary. For date columns it is also important to choose the appropriate format mask.
    Hope this helps,
    Gary
    SQL Developer Team
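
    If you want a quick sanity check of the file before running the wizard, something along these lines previews how each row maps onto the target columns. This is only a sketch and not part of SQL Developer; the file name text.txt and the column list come from the post above, so adjust them to your own table.

    # Sketch: preview how each comma-separated row maps onto the target columns
    # before importing with the Data Import Wizard. File name and column names
    # are taken from the post; change them to match your own table.
    import csv

    columns = ["id", "name", "last_name", "add", "pin", "mob"]

    with open("text.txt", newline="", encoding="utf-8") as f:
        for line_no, row in enumerate(csv.reader(f, skipinitialspace=True), start=1):
            if len(row) != len(columns):
                print(f"Line {line_no}: expected {len(columns)} values, got {len(row)}")
            else:
                print(dict(zip(columns, row)))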

  • Data mapping to SAP B1 using SDK

    Is there any way to do the data mapping from another system's database to the SAP B1
    database without using field-to-field mapping?
    For example: we want to migrate from another system to B1, so we need to transfer the old data to the new one. Of course the database structure is not the same, and it would be a lot of work to migrate their old POs to B1 (I'm thinking of using field-to-field mapping, so we must know the data structure of the old system and provide the information needed when we migrate it to B1).
    Are there any thoughts?
    I hope you understand what I mean.
    PS: all the migration is done using the SDK
    thanks,
    erick

    Hi Erick,
    I think using the DI API would be better.
    I know that not all data in the database is supported.
    But the simplest idea is:
    1. Get objects from the old company database
    2. Review the data with the SDK (validation and user field mapping)
    3. Add the modified objects to the new company database
    This should be much the same, and you can automate a lot of the work in step 2.
    Hope this is helpful for you.
    Hyunil Choi.

  • How to transfer the data in navigation view?

    Hi Gurus,
    I have done a Web Dynpro development that uses the Floorplan Manager roadmap.
    After the user enters values in the UI, they can review the data by pressing the Review button; the data is then transferred to the next screen, where the user can see the data they entered on the first screen.
    In my requirement I have three views: edit view / old field selection view / field selection view.
    I have created a Web Dynpro application that freezes the duration value at half of the work schedule allocated to the employee once they choose half day as the type of absence.
    In my application the Duration field is populated with half of the work schedule allocated to the employee (say 4 hours, i.e. an 8-hour full work schedule / 2).
    But once I press the Review button, the Duration field again refers to the full work schedule (say 8 hours) on the next screen.
    How do I map my changes to the next screen as well?
    Thanks in Advance,
    Dharani

    Hi Gurus,
    My application is based on Floorplan Manager, which has the roadmap (review data, send data and confirm data).
    The requirement is that whatever changes I make on the initial screen should pass the same values on to the next screen, which is the review screen,
    and then they have to pass on to the Send Data screen.
    The above-mentioned review data, send data and confirm data are buttons that are available on the screen once I deploy the Web Dynpro application.
    But in the layout there are no buttons defined in the view? I am really confused.
    Please help me, this is a very urgent requirement.
    Thanks in Advance,
    Dharani

  • Process Chain Red 'X', Exec Infopckg Yellow, Infopckg monitor/data correct

    Dear and respectable colleagues of the forum,
    I am experiencing a problem in a process chain when executing infopackages. The process chain has 6 "Execute infopackage" processes. Sometimes (1 of 4 attempts on average) my process chain ends in red with an 'X' status in the Process Chain Display Log view.
    Reviewing the chain log, one of the "Execute infopackage" processes appears in yellow (it loads from a direct update ODS to an InfoCube). However, reviewing the logs for the corresponding InfoPackage execution, I found that it says "Data successfully updated". Moreover, if I review the data loaded, it seems to have been loaded correctly.
    Then why does the "Execute infopackage" process in the process chain finish in yellow if the InfoPackage execution itself finishes correctly?
    To complete the whole scenario: I execute this process chain every night from an ABAP program; the program has a loop that traverses a table and executes the process chain as many times as there are rows in the table (the table has 10 rows, meaning offices). As you can understand, the chain is executed 10 times, and the error occurs in any of the 6 "Execute infopackage" processes indiscriminately.
    I reviewed ST22 and SM37 for all users for the date/time when the error occurred, but nothing bizarre was reported. It remains to review SM21.
    The following is the detail of messages.
    Logs for Execute Infopackage
       Data successfully updated
    Monitor of Infopackage
      Details Tab
         Requests (messages): Everything OK      
         Extraction (messages): Everything OK
         Transfer (IDocs and TRFC): Everything OK
         Processing (data packet): Everything OK
           Data Package 1 ( 45 Records ) : Everything OK
             Transfer Rules ( 45  -> 45  Records ) : No errors
             Update rules ( 45  -> 45  Records ) : No errors
             Update ( 45  new / 0 changed ) : No errors
             Processing end : No errors
         Process Chains : Errors occurred
           Llena cubos de carteras vtas sec por distrib - autom

    Dear s v desh,
    I am on SAP BW 3.5. I already checked the cube and it is green - all right.
    The message in the infopackage execution node in the process chain says (non-error):
      "Data successfully updated"
    The error message in process chain monitor (log) says:
    ......"Overall status: Error occurred: or: Missing messages
    ............Process Chains : Errors occurred"
    I just realized that there are no messages at all in the infopackage monitor (log).
    I have received a clue: someone asked me to review note 1396417, and I will.
    Regards
    Juan Alonso Teevin
