Trailing Minus in Data file causing issues!

Hello Everyone,
I think there should be an easy solution to this, but I can't figure out the syntax to make it work. My problem is that in my data file some of the amounts appear as 6785.28-. When I try to load the data file, BPC does not like the trailing minus sign. If I open the file in Excel and use the Text to Columns tool, it fixes the amounts to be -6785.28. Is there any way to accomplish this in BPC so users will not have to open the data file in Excel and then resave it?
Thanks in advance!
Nick

Yes, this can be done with a transformation file.
For the amount column you have to define a transformation that checks whether the last character of the string is a minus sign and, if it is, moves it to the front of the string. I think it can be done quite easily.
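If the transformation-file mapping turns out to be awkward, the same logic can also run as a small preprocessing step before the load. Below is a minimal Python sketch of that idea; the input/output file names, the comma delimiter, and the amount being in the last column are assumptions for illustration only.

import csv

def fix_trailing_minus(value):
    # "6785.28-" -> "-6785.28"; anything else is returned unchanged
    value = value.strip()
    if value.endswith("-"):
        return "-" + value[:-1]
    return value

with open("input.csv", newline="") as src, open("fixed.csv", "w", newline="") as dst:
    reader = csv.reader(src, delimiter=",")
    writer = csv.writer(dst, delimiter=",")
    for row in reader:
        if row:
            row[-1] = fix_trailing_minus(row[-1])  # assumes the amount is the last column
        writer.writerow(row)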
Regards
Sorin Radulescu

Similar Messages

  • RAC-DATA FILE ACCESSING ISSUE FROM ONE NODE

    Dear All,
    We have a two-node RAC (10.2.0.3) running on HP-UX. Since yesterday, one instance shows the error below when accessing data from a specific data file, whereas accessing the same data file from the other node works properly.
    Errors in file /oracle/product/admin/tap3plus/bdump/tap3plus4_dbw0_24950.trc:
    ORA-01157: cannot identify/lock data file 75 - see DBWR trace file
    ORA-01110: data file 75: '/dev/vg_rac/rraw_tap3plus_temp_live05'
    ORA-27041: unable to open file
    HPUX-ia64 Error: 19: No such device
    Additional information: 2
    Tue Jan 31 08:52:09 2012
    Errors in file /oracle/product/admin/tap3plus/bdump/tap3plus4_dbw0_24950.trc:
    ORA-01186: file 75 failed verification tests
    ORA-01157: cannot identify/lock data file 75 - see DBWR trace file
    ORA-01110: data file 75: '/dev/vg_rac/rraw_tap3plus_temp_live05'
    Tue Jan 31 08:52:09 2012
    File 75 not verified due to error ORA-01157
    Tue Jan 31 08:52:09 2012
    Thanks in Advance

    user585870 wrote:
    We have a two node RAC (10.2.0.3)running on Hp Unix. From yesterday onwards, from one instance accessing data from a specific data file showing the below error, whereas accessing from other node to the same datafile is working properly.That would be due to some kind of failure in the shared storage layer.
    RAC needs the very same storage layer to be visible and available on each RAC node - thus this needs to be some form of shared cluster storage.
    Should a piece of it fail on one node, that node will not be able to access the RAC database files on that shared storage layer and will throw the type of errors you are seeing.
    So what does this shared storage layer look like? Fibre Channel HBAs connected to a Fibre Channel switch and SAN, making SAN LUNs available as shared storage devices?
    Typically a shared storage failure also throws errors in the kernel log, because the underlying error is not an Oracle error but a kernel error, as it is in your case. The bottom error on the error stack points to the root cause:
    ORA-01157: cannot identify/lock data file 75 - see DBWR trace file
    ORA-01110: data file 75: '/dev/vg_rac/rraw_tap3plus_temp_live05'
    ORA-27041: unable to open file
    HPUX-ia64 Error: 19: No such device
    So HP-UX on that node is not seeing a specific shared storage device.
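    As a quick sanity check of device visibility, something along these lines can be run on each node (a rough sketch only; the path is taken from the ORA-01110 message above, and this is not an Oracle utility):

    import os, stat

    path = "/dev/vg_rac/rraw_tap3plus_temp_live05"

    if not os.path.exists(path):
        print(path + ": not visible on this node (matches the 'No such device' error)")
    else:
        mode = os.stat(path).st_mode
        if stat.S_ISCHR(mode):
            kind = "raw/character device"
        elif stat.S_ISBLK(mode):
            kind = "block device"
        else:
            kind = "other"
        print(path + ": present (" + kind + ")")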

  • How to avoid PDF attached files turning into winmail.dat files, causing inconvenience opening them? And why does the phone automatically change the file?

    The first time I opened the mail, it loaded the attachment as a PDF file and I was able to open it without any problems. But when I tried to open it again, I watched the phone automatically turn the PDF file into winmail.dat without any control on my part. Since then, I haven't been able to open the file anymore! And it was lucky that I was able to see it as a PDF file at all - all other attachments I have received since using my new iPhone 6 turn into winmail.dat.
    This is really inconvenient and frustrating for work. I can't believe I cannot just open a PDF file as a PDF file in the emails I receive on my iPhone 6. I was on an iPhone 4 before this and nothing like this ever happened.
    Solution needed asap please.
    Thankyou!

    Hi Jnkm,
    I'm sorry to hear you are having issues with your new iPhone 6. If you are getting a winmail.dat file instead of a PDF, you may want to see if it is the issue noted in the following article:
    iPhone, iPad, or iPod touch does not display attachment in email - Apple Support
    Regards,
    - Brenden

  • Acrobat Pro 8 - Linking files causes issue

    We create projects that contain multiple PDFs (the number varies).  For each project, a table of contents (TOC) is created.  The bookmarks within the TOC are links to other PDFs.  The issue we've begun experiencing is that after linking to a file, when previewing the document (via clicking the Pages option), one or more pages appear blank.  Acrobat crashes at this point.  When opened, the linked file will open, but contains blank pages.  We then have to replace the pages with the source, which works fine. 
    This issue results in the process taking several times longer than it should.  The process the user uses is pasted below.  I was unable to find a similar issue via Google or searching the Adobe forums.  Any help will be appreciated.  Thank you.
    Inserting Files into the Bookmark Tabs
    ALWAYS START FROM THE TABLE OF CONTENTS (TOC)
    1. CTRL + SHIFT + D (delete all but 1 page of the TOC)
    2. CTRL + SHIFT + I (insert the pdf file associated with the tab)
    3. CTRL + SHIFT + D (delete the rest of the TOC – always #1)
    4. CTRL + SHIFT + S (Save as: select the pdf file you inserted)
    5. On the tab/Bookmark you're working on
    a. Right click – Left Click
    b. CTRL + C (to copy all the text in this Bookmark)
    c. CTRL + D (Opens Document Properties)
    d. Tab (the current title will become highlighted)
    e. CTRL + V (to paste the text you just copied)
    f. Press OK
    g. CTRL + S (save)
    6. While still on the tab/Bookmark you just added
    a. Click on the Pages tab (to the left)
    Errors:

    Thank you.
    Adobe Acrobat Professional v8.0.0
    Windows 7 Professional SP 1

  • Custom Page layout and related tmp files causing issues

    Hello,
    I am trying to create custom page layouts using Design Manager and then editing the html file using a text editor.
    I have created a content type (inherited from the Page content type) and used this content type for my page layout. I used the Snippet Manager to add the relevant web part zones and snippets I needed and published the page layout. Under status it shows conversion
    successful, but I can see another file with a similar name to my page layout, ending in ~RF119c862.TMP, being created.
    Even when I publish my page layout, my web parts (added via snippets) don't show up in the page. I checked the preview of my page layouts: the web parts appear while it is in draft mode, but the moment I publish it loses all snippets (web parts).
    I checked the HTML even after the page layout is published and I can see all web part zones, my divs and snippet code there; somehow, once I publish, the corresponding .aspx page doesn't get updated.
    Has anybody come across this before? I'm trying this on an Office 365 tenant site.
    Regards,
    Manoj
    -- The opinions expressed here represent my own and not those of anybody else -- http://manojvnair.blogspot.com

    I have seen a similar problem on-prem when a client had distributed cache misconfigured, but as your problem is with SPO, I doubt that is the problem.
    Try creating another page layout but keep its associated content type untouched. Try adding snippets and such, publish and see what happens. If that works, then there must be something wrong with your custom content type.
    Also, when you save a change to your .html page layout, open its associated .aspx page layout in SPD and see if your changes were reflected there as well. As soon as you edit an .html page layout and save your changes, SharePoint should automatically update your .aspx page layout right away.
    Eric Overfield - PixelMill -
    blog.pixelmill.com/ericoverfield -
    @EricOverfield

  • DATA FILE space issue

    Hello All,
    I have a call regarding data file growth.
    One data file sits on the H drive, which has only 9 MB of free disk space; the next growth increment of that data file will be 60 MB with autogrow on, so there is not enough free space for it to grow.
    The remaining 3 data files sit on the D drive, which has enough free space for their growth.
    Do I need to request an increase of the disk size for the H drive, given that I have enough disk space on the D drive?
    Regards
    shathishkumar

    Hi Sathis,
    First find the largest tables in the system via transaction DBACOCKPIT, then check how to optimize/recover the space on that drive. For example, if BALDAT/BALHDR are very large, delete expired application logs through transaction SLG2. If other BASIS tables are very large, please check the housekeeping jobs in the following notes and schedule any missing jobs in your system to avoid further DB growth.
     Note 1411877 - New standard jobs
     Note 1440439 - New Standard Jobs (2)
     Note 16083 - Standard jobs, reorganization jobs
     Note 1034532 - Changes for standard jobs
    Once you have cleaned up the old/expired logs in the system, shrink the DB as suggested by S Sriram.
    You can do whole-database compression as well, but do it when the system load is low.
    If the other drive has enough space, you can detach the data file and attach it on that drive, but you need downtime for this.
    Thanks and regards,
    Pradeep

  • Issues while generating Schema DAT files

    We are facing two types of issues when generating schema ".dat" files from an Informix database on Solaris OS using the
    "IDS9_DSML_SCRIPT.sh" file.
    We are executing the command at the Solaris prompt as follows:
    "IDS9_DSML_SCRIPT.sh <DBName> <DB Server Name>".
    The first issue is that after the command is executed, while generating the ".dat" files, the following error occurs. This error occurs for many tables:
    19834: Error in unload due to invalid data : row number 1.
    Error in line 1
    Near character position 54
    Database closed.
    This happens randomly for some schemas, so we move the script to a different folder in Unix and execute it again.
    Can we get a solution for avoiding this error?
    2. The second issue is as follows..
    When the ".dat" files are generated without any errors using the script ,these .dat files are provided to the OMWB tool to load the Source Model.
    The issue here is sometimes OMWB is not able to complete the process of creating the Source Model from the .dat files and gets stuck.
    Sometimes the tables are loaded ,but with wrong names.
    For example the Dat files is having the table name as s/ysmenus for the sysmenus table name.
    and when loaded to oracle the table is created with the name s_ysmenus.
    Based on the analysis and understanding this error is occuring due to the "Delimiter".
    For example this is the snippet from a .dat file generated from the IDS9_DSML_SCRIPT.sh script.The table name sysprocauthy is generated as s\ysprocauthy.
    In Oracle this table is created with the name s_ysprocauthy.
    s\ysprocauthy║yinformixy║y4194387y║y19y║y69y║y4y║y2y║y0y║y2005-03-31y║y65537y║yT
    y║yRy║yy║y16y║y16y║y0y║yy║yy║y╤y
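    For illustration only, a cleanup along these lines could strip the stray escape character before the files go to OMWB. The file name and encoding below are placeholders, and this is just a sketch of the idea rather than a fix for the underlying unload error:

    import re

    # Remove a backslash that splits an identifier, e.g. s\ysprocauthy -> sysprocauthy.
    with open("systables.dat", encoding="latin-1") as src, \
         open("systables.clean.dat", "w", encoding="latin-1") as dst:
        for line in src:
            dst.write(re.sub(r"\\(?=[A-Za-z])", "", line))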
    Thanks & Regards
    Ramanathan KrishnaMurthy

    Hello Rajesh,
    Thanks for your prompt reply. Please find my findings below:
    *) Have there been any changes in the extractor logic causing it to fail before the write out to file, since the last time you executed it successfully? - I am executing only the standard extractors out of the extractor kit, so presumably this shouldn't be an issue.
    *) Can this be an issue with changed authorizations? - I will check this today, but again this does not seem likely, as the same object for a different test project I created executed fine and a file was created.
    *) Has the export folder been locked or write-protected at the OS level? Have the network settings (if it is a virtual directory) changed? - Does not seem so, for the same reason as above.
    I will do some analysis today and revert back for your help.
    Regards
    Gundeep

  • Anyone noticed issues when UCM contributor data files are indexed in GSA?

    Hi Guys,
    We are using Google search appliance to crawl UCM content (native documents).
    We don't have any issues with search results in this way. We are using dynamic converters to convert these documents into HTML in site studio web sites.
    But we have plans to move to Site Studio contributor data files (XML format).
    From your experience, has anyone noticed any issues in search results with UCM Site Studio contributor data files indexed by GSA?
    Thank you in advance.
    Edited by: 958795 on Oct 8, 2012 10:50 AM

    Hi Don,
    Thanks for the reply. I would discard the first one, because I already built the whole site using XML Data Sets, and the idea from the start was to use my own Atom feed to update the site. But the second one seems like a good choice, but I'm a bit puzzled. I was already using Spry:content for the pages that don't index correctly, putting them on an empty <span> tag so that they hid unloaded references... could you elaborate on that second choice, then, please?
    Here's a sample of the code:
    <!--start main content-->
    <div id="secondary-content" spry:detailregion="dsBase" class="wrapper">
      <div id="leftnav" class="frontpage">
        <div spry:state="loading">Loading content. Please wait...</div>
        <ul spry:repeatchildren="dsBase">
          <li class="Frontpage"><span spry:content="{title}"></span></li>
          <li class="subtitleFront">Posted on <span spry:content="{simpleDate}"></span></li>
          <li class="post"><span spry:content="{content}"></span></li>
        </ul>
        <p align="left"><br />
        Click <a href="archive.html">here</a> to see older posts.</p>
      </div>
      <div id="content-right" class="frontpage">
        <div class="wrapper">
          <h4 align="center">Featured art :</h4>
          <p align="center"> </p>
          <p align="center"><a href="gallery.html?row=4"><img src="images/tns/tn-AgainstAllOdds.gif" alt="Against All Odds" width="81" height="160"/></a></p>
          <p align="center">&quot;Against All Odds&quot;<br />
          Tobías Bartolomé</p>
          <p align="center"><a href="gallery.html?row=4">See more art at the gallery!</a></p>
        </div>
      </div>
      <div style="clear: both"></div>
    </div>
    And here's the link for the main page and Google's index for the site:
    http://www.cosmicollective.org/
    http://www.google.com/search?q=site:www.cosmicollective.org&hl=en
    Thanks again!
    Tomas

  • What causes Outlook error 0x8004010f "data file cannot be accessed"?

    Any indexing running on the machine? Is the .pst in the exclusion list?

    Hi all, got an interesting one here. A recurring problem for one of my clients on all 8 of their desktops where Outlook 2010 comes up with this error. Normally about one machine a month will do it. I know the solution and can repair it pretty quickly these days, but the manager is now asking why it keeps happening and I haven't got a clue. We've got 8 fairly recent desktops, Windows 7, all built from brand new good quality hardware. Outlook 2010 on each machine has one IMAP account set up. We've got nothing fancy here; the only group policies used are for mapped drives and printers, everything else is very much standard, no add-ins or integrations with Outlook. The error states the data file cannot be accessed, the file is there and accessible, ScanPST doesn't find any errors, it has not moved, but for some reason Outlook has created an...
    This topic first appeared in the Spiceworks Community

  • Outlook 2010 PST Import issue: ".pst is not an Outlook data file (.pst)"

    I recently installed the Office 2010 Beta and am attempting to import my old .PST files from Outlook 2007.  However, every time I try it I get the error, ".pst is not an Outlook data file (.pst)".  The two data files are large (2+ GB in size each) but have been working fine with Outlook 2007.  Steps to repro are:
    1.       In Outlook 2010, click the File tab.
    2.       Click Open and then click Open Outlook Data File.
    3.       Navigate to the .pst file to import and then click Ok.
    When I click "open Outlook data file" I get the error "the file Outlook.pst is not an outlook data file (.pst). 
    I've seen at least one other person post this question on another forum...  Any help you can provide would be fantastic as I have a lot of history (obviously ;-) tied up those files.
    Thanks!

    Hi TylerThrasher,
    I'm having the same problem you described above with my .pst file: "not an outlook personal folder file (.pst)..".
    I tried all the solutions mentioned in this forum, but unfortunately none of them worked.
    The last thing that I haven't tried yet is the recovery tool "Advanced Outlook Repair 3.4", which I'm not able to find on the internet at all!!!
    Really, I need your help, as this .pst file will save my life... I got fired by my company over a nonsensical problem. Now I will start a legal process and all my evidence is inside my emails, which are locked inside this file.
    Really, will appreciate your help if you can as you already solved your problem.
    Thanks you in advance
    Loubna

  • Field in data file exceeds maximum length

    Hi,
    I am trying to run the following SQL*Loader control job on my Oracle 11gR2. Running the SQL*Loader control job results in the 'Field in data file exceeds maximum length' error message. Below, I am listing the control file. Please suggest. Thanks.
    It's giving me an error when I run SQL*Loader on it:
    Record 61: Rejected - Error on table RMS_TABLE, column GEOM.SDO_POINT.X.
    Field in data file exceeds maximum length.
    Here is my SQL Loader Control file,
    LOAD DATA
    INFILE *
    TRUNCATE
    CONTINUEIF NEXT(1:1) = '#'
    INTO TABLE RMS_TABLE
    FIELDS TERMINATED BY '|'
    TRAILING NULLCOLS (
       Status NULLIF Status = BLANKS,
       Score,
       Match_type NULLIF Match_type = BLANKS,
       Match_addr NULLIF Match_addr = BLANKS,
       Side NULLIF Side = BLANKS,
       User_fld NULLIF User_fld = BLANKS,
       Addr_type NULLIF Addr_type = BLANKS,
       ARC_Street NULLIF ARC_Street = BLANKS,
       ARC_City NULLIF ARC_City = BLANKS,
       ARC_State NULLIF ARC_State = BLANKS,
       ARC_ZIP NULLIF ARC_ZIP = BLANKS,
       INCIDENT_N NULLIF INCIDENT_N = BLANKS,
       CDATE NULLIF CDATE = BLANKS,
       CTIME NULLIF CTIME = BLANKS,
       DISTRICT NULLIF DISTRICT = BLANKS,
       LOCATION NULLIF LOCATION = BLANKS,
       MAPLOCATIO NULLIF MAPLOCATIO = BLANKS,
       LOCATION_T NULLIF LOCATION_T = BLANKS,
       DAYCODE NULLIF DAYCODE = BLANKS,
       CAUSE NULLIF CAUSE = BLANKS,
       GEOM COLUMN OBJECT
       (
         SDO_GTYPE INTEGER EXTERNAL,
         SDO_POINT COLUMN OBJECT
         (
           X FLOAT EXTERNAL,
           Y FLOAT EXTERNAL
         )
       )
    )

    CREATE TABLE RMS_TABLE (
      Status VARCHAR2(1),
      Score NUMBER,
      Match_type VARCHAR2(2),
      Match_addr VARCHAR2(120),
      Side VARCHAR2(1),
      User_fld VARCHAR2(120),
      Addr_type VARCHAR2(20),
      ARC_Street VARCHAR2(100),
      ARC_City VARCHAR2(40),
      ARC_State VARCHAR2(20),
      ARC_ZIP VARCHAR2(10),
      INCIDENT_N VARCHAR2(9),
      CDATE VARCHAR2(10),
      CTIME VARCHAR2(8),
      DISTRICT VARCHAR2(4),
      LOCATION VARCHAR2(128),
      MAPLOCATIO VARCHAR2(100),
      LOCATION_T VARCHAR2(42),
      DAYCODE VARCHAR2(1),
      CAUSE VARCHAR2(17),
      GEOM MDSYS.SDO_GEOMETRY);

    Hi,
    Looks like you have a problem with record 61 in your data file. Can you please post it in reply.
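    In the meantime, to narrow down which field in record 61 is too long, a quick check along these lines might help. This is only a rough sketch: it assumes plain '|'-delimited physical records, ignores the CONTINUEIF NEXT(1:1) = '#' continuation logic, and uses a placeholder data file name.

    TARGET_RECORD = 61

    with open("rms_data.dat") as f:
        for lineno, line in enumerate(f, start=1):
            if lineno == TARGET_RECORD:
                # print the length of each '|'-delimited field so the oversized value stands out
                for pos, field in enumerate(line.rstrip("\n").split("|"), start=1):
                    print("field", pos, ":", len(field), "chars ->", repr(field[:40]))
                break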
    Regards
    Ivan

  • Essbase Data Export not Overwriting existing data file

    We have an ODI interface in our environment which is used to export data from Essbase applications to text files using data export calc scripts, and then we load those text files into a relational database. Lately we have been seeing an issue where the data export calc script is not overwriting the file and is just appending the new data to the existing file.
    The OverWriteFile option is set to ON.
    SET DATAEXPORTOPTIONS {
         DataExportLevel "Level0";
         DataExportOverWriteFile ON;
         DataExportDimHeader ON;
         DataExportColHeader "Period";
         DataExportDynamicCalc ON;
    };
    The "Scenario" variable is a substitution variable which is set during the runtime. We are trying to extract "Budget" but the calc script is not clearing the "Actual" scenario from the text file which was the scenario that was extracted earlier. Its like after the execution of the calc script, the file contains both "Actual" and "Budget" data. We are not able to find the root cause as in why this might be happening and why OVERWRITEFILE command is not being taken into account by the data export calc script.
    We have also deleted the text data file to make sure there are no temporary files on the server or anything. But when we ran the data export directly from Essbase again, then again the file contained both "Actual" as well as "Budget" data which really strange. We have never encountered an issue like this before.
    Any suggestions regarding this issue?

    Did some more testing and pretty much zeroed in on the issue. Our Scenario is actually something like "Q1FCST-Budget", "Q2FCST-Budget", etc.
    This is the reason why we need to use a member function: the calc script reads "&ODI_SCENARIO" (which is set to Q2FCST-Budget) as a number and gives an error. To convert this value to a string we are using the @member function, and this seems to be the root cause of the issue. The ODI_SCENARIO variable is set to "Q2FCST-Budget", but when we run the script with the calculation function @member("&ODI_SCENARIO"), the data file brings back the values for "Q1FCST-Budget" out of nowhere, in addition to the "Q2FCST-Budget" data which we are trying to extract.
    Successful Test Case 1:
    1) Put Scenario "Q2FCST-Budget" in hard coded letters in Script and ran the script
    e.g "Q2FCST-Phased"
    2) Ran the Script
    3) Result Ok.Script overwrote the file with Q2FCST-Budget data
    Successful Case 2:
    1) Put scenario in @member function
    e.g. @member("Q2FCST-Budget")
    2) Results again ok
    Failed Case:
    1) Deleted the file
    2) Put the scenario in a substitution variable and used the member function @member("&ODI_Scenario"), then ran the script. (ODI_SCENARIO is set to Q2FCST-Budget in Essbase variables.)
    e.g. @member("&ODI_SCENARIO")
    3) Result: the text file contained both "Q1FCST-Budget" and "Q2FCST-Budget" data values.
    We are still not close to the root cause of why this issue is happening. Putting the sub var in the member function changes the complete picture and gives us inaccurate results.
    Any clues anyone?

  • Timeouts increased after we moved USR, SAP data files and TLogs to new SAN

    We are having issues with timeouts after we moved our USR, SAP SQL Datafiles and SAP Transaction Logs from our old SAN to a new SAN.
    Timeouts for SAPGUI users are set to 10 minutes.
    We are running Windows Server 2003 with SQL Server 2005.
    The SAP database has 8 datafiles with a total size of about 350GB.
    Procedure we used to move SAP to new SAN:
    1. Attached 3 new SAN Volumes
         -a. USR
         -b. Data Files
         -c. Transaction Logs
    2. Shutdown SAP and SQL services
    3. Aligned the new volumes with a 1024 KB offset and gave the data file and transaction log volumes a 64 KB allocation
        size. (The alignment and 64 KB allocation size were not set up for these volumes on the old SAN.)
    4. Copied the 3 volumes from old to new.
    5. Changed the new volumes drive letter to the drive letters of the old volumes.
         -a. I had to restart in order to change the USR volume.
         -b. Because of this I had to set up the sapmnt and saploc shares again.
    6. Started SQL services and then SAP services and everything came up just fine.
    The week before we had anywhere from 1 to 9 timeouts per day.
    This week: Monday had 20 and Tuesday had 26.
    On Monday we saw that MD07 was the only transaction that was timing out, but Tuesday had others as well.
    The number of users in the system is about the same. The number of orders going in is about the same. No big transports went in right before we switched.
    Performance counters that I know about for disk look a lot better on the Data Files.
    - PAGEIOLATCH_SH ms/request is about 50% better
    - Under I/O Performance in DBACOCKPIT:
      - MS/OP is now anywhere from 5 to 30 - Old SAN: 50 to 300
    - The Hit Ratio is over 99% - same as the old SAN
    Looking at Wiley Introscope graphs:
    - The "SAP Host: Average queue length" is about 30% to 40% lower then the old SAN.
    - the "SAP Host: Disk utilization in %" is about the same.
    Questions:
    1. Did we do anything wrong or miss anything with our move procedure?
         a. Do we have to do anything in SQL since we changed volumes even though we kept the drive letters the same?
    2. What other logs or performance counters should I be looking at?
    Thank you,
    Neil

    Our new SAN Vendor is Compellent.  They have been fantastic.  I would highly recommend checking them out.
    The reasons for the timeouts had nothing to do with the SAN...Well kind of anyway.
    I decided to check t-code SM20 to see what users were doing when these timeouts were happening. What I found was that the program R_BAPI_NETWORK_MAINTAIN was being called thousands of times in a matter of 10 to 15 minutes at random times throughout the day. It would take up about 50 to 80 percent of the programs being executed during these times.
    So, I sent this information to our developers and they found out that R_BAPI_NETWORK_MAINTAIN was being called from another program that was looping thousands of times. The trigger to stop the loop wasn't firing fast enough. They made a change and we haven't seen the timeouts since.
    I think the performance increases allowed the loop to run faster, which caused the slowdowns and timeouts to happen more often.
    Thank you to everyone for their help!
    Neil

  • Cannot open your default e-mail folders. You must connect to Microsoft Exchange with the current profile before you can synchronize your folders with your Outlook data file (.ost)

    Fresh installation of Exchange Server 2013 on Windows Server 2012.
    Our first test account cannot access their email via Outlook but can access it fine through OWA. The following message appears: "Cannot open your default e-mail folders. You must connect to Microsoft Exchange with the current profile before you can synchronize your folders with your Outlook data file (.ost)".
    If I turn off cached Exchange mode, setting the email account to not cache does not resolve the issue and I get a new error message: "Cannot open your default e-mail folders. The file (path\profile name).ost is not an Outlook data file (.ost)". Very odd, since it creates its own .ost file when you run it for the first time.
    I cleared the AppData\Local Outlook folder and I tested on a new laptop that has never connected to Outlook; same error message on every system.
    Microsoft Exchange RPC Client Access service is running.
    No warning, error or critical messages in the eventlog, it's like the healthiest server alive.
    Any help would be greatly appreciated. I haven't encountered this issue with previous versions of Exchange.

    So it looks like a lot of people are having this issue, and seeing how Exchange 2013 is still new (relative to the world) there isn't much data around to answer this. I've spent a lot of time trying to figure this out.
    Here is the answer. :) - No, I don't know it all, but I'm going to try to give you the most reasonable answer to this issue, in a most logical way.
    The first thing I did when I was troubleshooting this issue was to ignore Martina Miskovic's suggestion for Step 4, http://technet.microsoft.com/library/jj218640(EXCHG.150), because it didn't make sense to me, since I was trying to connect
    Outlook not outside the LAN but actually inside. However, Martina's suggestion does fix the issue if it's applied in the correct context.
    This is where the plot thickens (it's stew). She failed to mention that things like SSL (which I consider practically useless - anyone who ever worked in a business environment where the owner pretty much trusts anyone in the company, otherwise they don't
    work there - very good business practice in my eyes btw, can confirm that...) are some sort of fetish with Microsoft lately. Exchange 2013 is no exception.
    In Exchange 2003, Exchange 2007 and Exchange 2010 you could install it and then go to Outlook and set it up. When the manual Microsoft Exchange profile in Outlook asked you for a server name, you would give it, along with the name of the person you are setting
    up - as long as the machine is on the domain, not much more is needed. IT JUST WORKS! :) What a concept: if the person is already on the premises of the business - GIVE HIM ACCESS. I guess that was too logical for Microsoft. Now if you're off premises you can use things
    like Outlook Anywhere - which I might add had its place under that scenario.
    In Exchange 2013, the world changed. Of course Microsoft doesn't feel like telling people in plain English - I'm sure there is an article somewhere but I didn't find it. Exchange 2013 does not support direct configuration of Outlook like all of its
    previous versions. Did your jaw drop? Mine did when I realized it. So now when you are asked for your server name in a manual Outlook setup and you give it Exchange2013.yourdomain.local, it says it cannot connect to it. This happens because ALL - INTERNAL AND
    EXTERNAL - connections are now handled via Outlook Anywhere. You can't even disable that feature and have it function the reasonable way.
    So now the question still remains - how do you configure Outlook? Well, under server properties there is this nice section called Outlook Anywhere. You have a chance to configure its external and internal address. This is another thing that should be logical,
    but it didn't work that way for me. When I configured the external address differently from the internal one, it didn't work. So I strongly suggest you get it working with the same internal address first and then ponder how you want to make it work for outside
    users.
    Now that you have this set up, you have to go to the virtual directories and configure the external and internal address there - this is actually what Step 4 that Martina was referring to has you do.
    Both the external and internal address are now the same and you think you can configure your Outlook manually - think again. One of the most lovely features of Outlook Anywhere, and the reason why I had never used it in the past, is that it requires a TRUSTED
    certificate.
    See, so it's not that Exchange 2013 requires a trusted certificate - it's that Exchange 2013 lacks the feature that was there since Windows 2000 and Exchange 5.5.
    So it's time for you to install an Active Directory Certificate Authority. Refer to this wonderful article for the exact steps - http://careexchange.in/how-to-install-certificate-authority-on-windows-server-2012/
    Now even after you do that, it won't work, because you have to add the base private key certificate, which you can now download from your internal certsrv site, to the Default Domain Policy (and yes, some people claim NEVER to mess with the Default Domain Policy,
    always make an additional one... it's up to you - I don't see direct harm if you know what you want to accomplish). See this if you want to know the exact steps: http://technet.microsoft.com/en-us/library/cc738131%28v=ws.10%29.aspx
    This is the moment of ZEN! :) Do you feel the excitement? After all, it is your first time. Before we get too excited, let's first request and then install the certificate on the actual Exchange server via the GUI and assign it to all the services you can (IIS, SMTP and
    there is a 3rd - I forget, but you get the idea).
    Now go to your client machine where you have Outlook open, browse to your Exchange server via https://exchang2013/ in IE, and if you don't get any certificate errors - it's good. If you do, run gpupdate /force on the client and the server. This will refresh
    the policy. Don't try to manually install the certificate from Exchange's website on the client. If you want to do something manually, do it with the base certificate from the private key, but if you added it to the domain policy you shouldn't have to.
    Basically the idea is to make sure you have a CA, that the CA allows you to browse to Exchange, that you get no cert error, and that you can look at the cert and see it's from a domain CA.
    NOW, you can configure your Outlook. EASY, grasshoppa - not the manual way. WHY? Because the automatic way will now work. :) Let it discover that Exchange and populate it all - and tell you it's happy! :)
    Open Outlook - BOOM! It works... Was it as good for you as it was for me?
    You may ask, why can't I just configure it manually? You CAN. It's just a nightmare. Go ahead and open the settings of the account that got auto-configured... How do you like that server name? It should read something like [email protected]
    and if you go to the Advanced and then Connection tab, you'll see Outlook Anywhere is checked as well. Look at the settings - there is the name of the server, FQDN I might add. It's there in 2 places and one has that Mtdd-something:Exchange2013.yourdomain.local.
    So what is that GUID in the server name and where does it come from? It's the identity of the user's mailbox, so for every user that setting will be different, but you can figure it out via the console on the Exchange server itself - if you wish.
    Also a note: if your SSL certs have any trouble, it will just act like Outlook can't connect to the Exchange server, even though it is just declining the connection because the cert/cert authority is not trusted.
    So in short, Outlook Anywhere is EVERYWHERE! And it has barely any GUI or config, and you are just supposed to magically know what that kind of generic error message means... Server names are now GUIDs of the [email protected] - THAT MAKES PERFECT
    SENSE, MICROSOFT! ...and you have to manage certs... and the only place where you are going to find the name of the server is inside the d*** Outlook Anywhere settings in the config tab, under its own config button - CAN WE PUT THE CONFIG ANY FURTHER!
    Frustrating beyond reason - that should be Exchange's new slogan...
    Hope this will help people in the future and won't get deleted because it's bad PR for Microsoft.
    PS
    ALSO if you want to pick a fight with me about how SSL is more secure... I don't wanna hear it - go somewhere else...

  • Error Message: The data provider required to connect to the local data file could not be found. The file will be added to the project but the typed DataSet associated with the file will not be generated.

    I am currently taking the course "C# Fundamentals" from Bob Tabor's site, LearnVisualStudio.NET. I'm on day 8, where we are integrating SQL Server with C#. In Lessons 6 and 7 (I haven't gotten past these two) I am having an
    issue. In Lesson 6, "Retrieving Data with ADO.NET 2 in a Connected Scenario (SQL Server Compact Edition)", when I go to create the data file I get the error message I typed into the title. I cannot get the db to appear in the Solution
    Explorer, or open up so I can create a table. When I try to skip past that lesson and complete Lesson 7, I run into the same basic problem: the error messages are different, but I get an exception when I attempt to run the program. Even
    though I have been able to create the table for this lesson, the exception states I have an invalid object name, 'Customers', which is the name of the table. I have tried reading several different suggestions for correcting this issue, but have not understood
    exactly what I need to do. Since I work from home on my computer, I do not want to do anything with files or downloads unless I understand what I need to do, because I do not want to lose my job. Any help would be appreciated.
    JennyBarrett7

    Hi
    According to your error message, we need to verify whether Microsoft SQL Server Compact appears in the
    Change Data Source dialog. If not, you need to install the
    SQL Server Compact components for Visual Studio first. If you choose to install SQL Server Compact 4.0, note that it is supported in Visual Studio 2010 Service Pack 1 or later versions. I recommend you install the latest
    Service Pack (SP) of SQL Server Compact and the latest SP of Visual Studio, then check if the error still occurs. For more information, see:
    http://blogs.msdn.com/b/sqlservercompact/archive/2011/03/15/sql-server-compact-4-0-tooling-support-in-visual-studio-2010-sp1-and-visual-web-developer-express-2010-sp1.aspx
    However, if there is no problem with the installation of SQL Server Compact, it will be an issue regarding ASP.NET and website deployment. I suggest you post the question in the ASP.NET forums at
    http://forums.asp.net/ - that is the appropriate place and more experts will be able to assist you.
    In addition, you can review the following link:
    Working with SQL Server Compact in Visual Studio: http://msdn.microsoft.com/en-us/library/gg606540(v=vs.100).aspx
    Thanks
    Lydia Zhang
