Many dialog processes consumed on the ECC server when executing an InfoPackage in BW

Hi,
I know this question has been asked many times, but I can't figure out why it's not working.
When I execute an InfoPackage, a lot of dialog processes are used in the backend (ECC) system, making it inaccessible (Windows server).
I tried to make some changes in SMQS and RSA1_TRFC_OPTION_SET, but they had no effect on the system.
Can someone please explain, step by step, how to configure the two systems?
Thanks a lot,
Regards,
Chea-Lie

Hi,
You can maintain the control parameters for data load in your ECC system:
Go to transaction SBIW
General Settings -> Maintain Control Parameters for Data Transfer
Refer to OSS Note 417307.
This should help.
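For illustration only - the values below are just an example, not a recommendation, and the table name ROIDOCPRMS and the field descriptions are from memory, so verify them in your system - the entry maintained there looks roughly like this:
  Max. size (kB)   20000   -- maximum size of a data packet
  Max. lines       50000   -- upper limit for records per data packet
  Frequency        10      -- after how many data IDocs an info IDoc is sent
  Max. processes   2       -- maximum number of parallel (dialog) processes per data request
The last parameter is the one that directly limits how many dialog work processes a single InfoPackage load can occupy on the ECC side, which is what the original question is about.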
Please check the following documents on Load Performance.
[Performance Tuning Massive SAP BW Systems - Tips & Tricks|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94]
Do go through the checklist in the below document.
[BW Performance Tuning|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/701382b6-d41c-2a10-5f82-e780e546d3b6]
Regards,
Gaurav

Similar Messages

  • Java Process Consuming 100% of Server CPU

    Hi,
    We have a new instance of CQ5 (version 5.4) running on a Windows 2003 SP2 Server. For some reason, which we can't determine, after a period of time a Java process on the system begins to consume 100% of the CPU and the system becomes totally unresponsive. Has anyone seen this before? Is there a fix for this?
    RK         

    Does it eventually recover on its own? It's possible that the JVM is doing full garbage collection. How big is your max heap size? I suggest adding some garbage collection debugging to the JVM_OPTS to see whether it is Java's garbage collection or not. Might as well check the heap too, to see if it's running out of memory.
    Example:
    -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+PrintTenuringDistribution -Xloggc:/tmp/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp
    Ron

  • DB Connect DataSource gives an error when executing an InfoPackage from SQL Server

    Hi,
    I have created a DB Connect DataSource for a SQL Server database. Whenever I try to execute the InfoPackage, it gives the error below, and when I check the status of the job in the DataSource, it stays yellow.
    SQL error "99" with message: "[Microsoft][SQL Server Native Client 10.0]Numeric
    Message no. RSDS_ACCESS023
    Any suggestions on this? Thanks.
    Regards
    Deepa.

    Yes, I resolved it.
    Please ask your admin team to carry out the following steps; this resolved the issue for me:
    Possible problem areas
    a) Datetime data type in source system tables
    The Datetime DB data type and similar types (timestamp, smalldatetime) are not used in SAP installations. This type of data cannot be transferred consistently into the BW system without additional activities.
    Solution:
    You must create a view in the source system to transform the data. Since a field of the type DateTime actually represents two SAP Basis types (DATES, TIMES), you must split it into a maximum of two fields, as appropriate:
      create view <VIEWNAME> as
      select
      VD_1_D = convert(varchar(4), datepart(yyyy, d_1))
             + case len(convert(varchar(2), datepart(mm, d_1)))
                  when 1 then '0' + convert(varchar(1), datepart(mm, d_1))
                  else convert(varchar(2), datepart(mm, d_1))
                  end
             + case len(convert(varchar(2), datepart(dd, d_1)))
                  when 1 then '0' + convert(varchar(1), datepart(dd, d_1))
                  else convert(varchar(2), datepart(dd, d_1))
                  end,
      VT_1_T =  case len(convert(varchar(2), datepart(hh, t_1)))
                  when 1 then       /* Hour Part of TIMES */
                        case convert(varchar(2), datepart(hh, t_1))
                          when '0' then '24'    /* Map 00 to 24 ( TIMES ) */
                          else '0' + convert(varchar(1), datepart(hh, t_1))
                        end
                  else convert(varchar(2), datepart(hh, t_1))
                  end
             + case len(convert(varchar(2), datepart(mi, t_1)))
                  when 1 then '0' + convert(varchar(1), datepart(mi, t_1))
                  else convert(varchar(2), datepart(mi, t_1))
                  end
            + case len(convert(varchar(2), datepart(ss, t_1)))
                  when 1 then '0' + convert(varchar(1), datepart(ss, t_1))
                  else convert(varchar(2), datepart(ss, t_1))
                  end
      from <TABLENAME>
    The d_1 field in the table is converted into the DATES field VD_1_D, and the t_1 field into the TIMES field VT_1_T. During the generation of the VT_1_T field, the value for 00:00 hours is also converted to 24:00 hours. This conversion may be omitted if it is not required.
    b) Float data type in source system tables
    In accordance with the IEEE standard regulations, MS SQL Server supports a value range of 1x10E-307 to +/- 1x10E+308. This also applies to the corresponding ABAP data type. However, the BW database in question may restrict this. Depending on the system, a restriction to 1x10E-25 is possible.
    c) Nvarchar data type in source system tables
    The length information displayed is twice as big as the number of characters you specified when you created the column.
    d) Writing names in the source system
    Since the R/3 kernel used in BW can only process table and column names written in upper case, you must create DB views on the source system that convert the original names in accordance with this rule, as sketched below.
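    A minimal sketch of such a renaming view (the table and column names below are invented for illustration):
      create view VCUSTOMERS (CUSTOMER_ID, CUSTOMER_NAME) as
        select customer_id, customer_name
        from customers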
    e) Code page and sort sequence of the source system
    R/3 kernel-based systems such as BW basically assume that the database being used was created using code page cp850 and with the sort sequence 'bin2'.
    The configuration of the source system may differ. If the sort sequence is different, operations for string search (like) and area search (between, >, <) on character fields may return different results.
    Solution:  There is currently no solution.
    If you use multibyte code pages in the source system to save data with character sets of more than 256 characters (Kanji, Hiragana, Korean, Chinese and so on), the characters may be corrupted as a result.
    Solution:  There is currently no solution.
    f) Authorizations and visible objects
    BW DB Connect uses a remote server as a source system to extract data; the data already exists in that remote system. To use BW DB Connect, you create a new login and new views that the SAP BW system uses when connecting.
    Remember that only the DB objects (tables and views) that belong directly to the new database login are visible.
    You cannot use a login that belongs to the sysadmin server role (for example 'sa').
    To create the new login and views:
    1. Create a new login and map it to a user in the database with membership in db_ddladmin
    2. Grant the new user (SELECT at least) access to the original data tables.
    3. Login as this new login and create the views in the database/schema of that user, referring to the original (source) data tables.
    4. Specify this new login in the data extraction; this login/user will then read from its own views in its own schema.
    5. Optionally, you can switch the new user's role to db_datareader, which is sufficient for reading data. Further changes on the remote system, e.g. new or changed views, will require the db_ddladmin role again.
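    As a rough T-SQL sketch of steps 1 to 3 and 5 - all names here (bw_extract, SourceDB, dbo.SourceTable, VSOURCE, COL1, COL2) are made up, and the exact statements should be adapted to your SQL Server release:
      -- 1. Create the login and a database user that owns its own schema,
      --    with db_ddladmin membership
      CREATE LOGIN bw_extract WITH PASSWORD = 'Str0ng!Passw0rd';
      GO
      USE SourceDB;
      CREATE USER bw_extract FOR LOGIN bw_extract WITH DEFAULT_SCHEMA = bw_extract;
      GO
      CREATE SCHEMA bw_extract AUTHORIZATION bw_extract;
      GO
      EXEC sp_addrolemember 'db_ddladmin', 'bw_extract';
      -- 2. Grant read access to the original data table
      GRANT SELECT ON dbo.SourceTable TO bw_extract;
      GO
      -- 3. Connected as bw_extract, create the view in its own schema
      --    (upper-case names only, see point d above)
      CREATE VIEW VSOURCE AS SELECT COL1, COL2 FROM dbo.SourceTable;
      GO
      -- 5. Optionally switch the user back to read-only afterwards
      EXEC sp_droprolemember 'db_ddladmin', 'bw_extract';
      EXEC sp_addrolemember 'db_datareader', 'bw_extract';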

  • Runtime prediction when executing InfoPackage

    In the Header tab while monitoring a dataload, there is a field that says runtime. In this field, it says '1h 48m 33s = 0.75 % of the predicted length of 10 Day(s) 0s'
    How does the system get this value?  And should this be a guideline on how long it would take?
    I'm extracting about 2.5 million records based on a DataSource view with 4 tables.

    Hi,
    The predicted runtime is based on the longest previous run of the same load. For example, '1 hour = 10% of the predicted length of 10 hours' means that at some point in the past this load took 10 hours to complete, and that run acts as the baseline for current loads. If the current load takes 11 hours, that becomes the new baseline.
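    To check this against the numbers in the question: the predicted length of 10 days is 240 hours, and the elapsed runtime of 1h 48m 33s is about 1.81 hours, so
      1.81 h / 240 h ≈ 0.0075, i.e. the 0.75 % shown in the header tab.
    The "10 days" is therefore not a forecast of how long this load will take; it is just the longest recorded runtime for this load, used as the 100% reference.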

  • Execute Infopackage Through BAPI Using Excel Macro (BAPI_IPAK_START)

    Hi everyone,
    I have a problem when executing an InfoPackage through a BAPI using an Excel macro. I have created a button in the macro. When this button is clicked, the BAPI that triggers the InfoPackage (BAPI_IPAK_START) is executed and the button is disabled.
    After this process has completed (the traffic light for the request is green in the update rules), the button should be enabled again.
    Here is the subroutine, in pseudocode, that I want to write:
    Private Sub ClickButton()
    Begin
    1. ThisButton.Activated = False   --> Disabled Button
    2. Call BAPI/custom Function Module to execute InfoPackage (BAPI_IPAK_START)
    4. ThisButton.Activated = True   --> Enabled Button
    End
    The problem is that I need a statement like this between statement no. 2 and statement no. 4.
    The statement that I want:
    3. Wait until the BAPI has executed completely
    So the user can click this button again only after the process has finished completely. I don't know how to do this in a macro (in ABAP I know I can use "WAIT UP TO ... SECONDS"); others have said this can be done using an event in the schedule options of the InfoPackage. Can anyone please help me?
    Thank you.
    Regards,
    Satria B

    Enter that request number in transaction RSRQ and monitor the load,
    or right-click on the DataSource -> Manage: you will see the request in yellow status (in progress), and clicking on it takes you to the monitor screen.
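    As a rough sketch of the missing step 3 (this is not from the thread: BAPI_IPAK_START returns a request ID, and the status function module named below is an assumption, so check in SE37 which one is available in your release):
    3. Do
           Wait a few seconds (e.g. Application.Wait in Excel VBA)
           Call a status BAPI such as BAPI_ISREQUEST_GETSTATUS, passing the request ID returned by BAPI_IPAK_START
       Loop Until the returned status is green (or red, to stop on errors)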

  • Process runs faster when executed independently, but runs slower when a parallel big process is running

    Hi,
    The issue is that a process, say "Process A", runs fine on certain days. On some days there are additional processes running on the server, and when these additional processes are running, Process A suffers a performance issue.
    The interesting point is that these additional processes have been running for a long time, and earlier Process A ran fine even while they were running. Suddenly, over the past 2 or 3 weeks, there is a performance issue in Process A whenever these additional processes run.
    Note: Nothing has been modified in Process A.
    Process A is a SQL job that has an SSIS package and stored procedures in different steps.
    When multiple parallel processes are running, the SSIS package step suffers around a 40% increase in execution time, whereas the stored procedures see only a 15-20% increase.
    When Process A is executed while no other big processes are running, its execution time is fine. For the past few days, the issue appears only when some other big parallel processes are running.
    Currently this is my analysis:
    Since Process A runs fine when executed independently, I assume there is no issue in the process itself.
    Since the issue occurs only when some other big process is running at the same time as Process A, I believe it is a disk I/O issue. Would the issue be resolved if the RAM size were increased?
    Is there any way to check whether RAM is being fully utilized by the server?
    Is there any other possible reason for a sudden dip in performance when parallel processes are running?
    Could there be issues in the additional processes themselves, and if so, could they impact Process A?
    Please let me know if you need any further information. In fact, I am not able to diagnose the actual root cause of the performance issue in Process A, as nothing has been modified.
    It would also be very helpful to get ideas on different ways of finding the actual root cause of this performance issue.
    NOTE: This is a data warehouse.
    Thanks,
    Raksha

    When a query has a parallel plan, it will in general try to grab all cores up to the maxdop setting, but it often uses them inefficiently.
    But in that case, there are better odds that the queries will not be battling each other for resources!
    What Josh alludes to is the fact that SQL Server needs to partition the data so that different partitions are processed on different cores. This partitioning is based on statistics, and the statistics may be out of date or not accurate enough. This may result in the data not being partitioned proportionally, so some threads get very little data to work with. Those CPUs are then mostly idle, which means there may still be room for the two processes to run at full speed. (That is, full speed with the given plan, which is not the full speed that would have been possible had the partitioning been accurate.)
    I mention this because you asked about parallelism, and many systems leave maxdop at its default setting of 0 (which means "go ahead and grab everything!") even though Microsoft recommends setting it to a different number depending on this and that.
    Since this is a data warehouse, Raksha should not tamper with "max degree of parallelism", I think.
    Erland Sommarskog, SQL Server MVP, [email protected]
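    Not from Erland's reply, but as a reference for two of the checks discussed above - the current MAXDOP setting and a rough server-level memory picture - the following read-only T-SQL could be used (changing MAXDOP is a separate decision, as Erland notes):
      -- current 'max degree of parallelism' setting
      EXEC sp_configure 'show advanced options', 1;
      RECONFIGURE;
      EXEC sp_configure 'max degree of parallelism';
      -- rough check of physical memory state at the server level
      SELECT total_physical_memory_kb,
             available_physical_memory_kb,
             system_memory_state_desc
      FROM   sys.dm_os_sys_memory;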

  • Firefox does not open when clicking on the icon, the firefox.exe process consumes 99 % of the CPU and my computer runs slowly.

    I have been a happy user of Firefox for many years. Around last April first I opened Firefox and was offered an update to v 4.42.0.0 which I accepted. Installation seemingly went well. I forget now exactly what happened but the result has been that ever since April 1 I haven’t been able to open the Firefox browser when clicking on the Firefox icon. My computer now was running very slowly. I tried to uninstall Firefox, but a popup told me I couldn’t because Firefox was in use. This confused me because I couldn’t see it being used. Only now I have found that on Task Manager processes that “firefox.exe” was consuming 99 % of the CPU. After removing firefox.exe by clicking End Process my computer ran better. I uninstalled Firefox I had on my computer and installed Firefox 5.0. Unfortunately I have the same problems: Firefox does not open when clicking on the icon, the firefox.exe process consumes 99 % of the CPU and my computer runs slowly.

    Born2die! Brilliant. I am a desktop clicker and never knew Firefox had a safe mode.
    Thank You!
    I was unable to start in safe mode initially. The second time I disabled all of the Add-Ons and she started up just fine. I enabled them one by one hoping to track down the culprit but the problem seems to have gone away.
    BTW I am running ver. 3.6.8 (in response to cor-el's earlier post) and
    Firefox is in the process of downloading 3.6.10 (which I am starting to think may have been what caused this whole problem to begin with)
    Incidentally, what's up with all of the Java Console add-ons?
    I have:
    Java Console 6.0.11, Java Console 6.0.13, Java Console 6.0.15,
    Java Console 6.0.17, Java Console 6.0.20, and Java Console 6.0.21.
    What are they? Do I need them? Can I uninstall them? Is this due to using OpenOffice?
    Also, there is a .NET Framework 0.0.0 entry. Should I uninstall it?

  • ADF how to display a processing page when executing large queries

    The ADF application that I have written currently has the following structure:
    DataPage (search.jsp) that contains a form in which the user enters their search criteria --> forward action (doSearch) --> DataAction (validate) that validates the entered values --> forward action (success) --> DataAction (performSearch) that has a refresh method dragged onto it, plus an action that manually sets the iterator for the collection to -1 --> forward action (success) --> DataPage (results.jsp) that displays the results of the then (hopefully) populated collection.
    I am not using a database, I am using a java collection to hold the data and the refresh method executes a query against an Autonomy Server that retrieves results in XML format.
    The problem I am experiencing is that sometimes a user may submit a query that is very large, and this creates problems because the browser times out while waiting for the results to be displayed; as a result, a JBO-29000 null pointer error is displayed.
    I have previously gotten around this with Java servlets: when a processing servlet is called, it automatically redirects the browser to a processing page with an animation on it so that the user knows something is being processed. The processing page then recalls the servlet every 3 seconds to see whether the processing has been completed, and if it has, forwards to the appropriate results page.
    Unfortunately I cannot stop users from entering large queries, as the system requires users to be able to search in excess of 5 million documents on a regular basis.
    I'd appreciate any help/suggestions that you may have regarding this matter as soon as possible, so I can make the necessary amendments to the application prior to its pilot in a few weeks' time.

    Hi Steve,
    After a few attempts - yes, I have hit a few snags.
    I'll send you a copy of the example application that I am working on but this is what I have done so far.
    I've taken a standard application that populates a simple java collection (not database driven) with the following structure:
    DataPage --> DataAction (refresh Collection) -->DataPage
    I have then added this code to the (refreshCollectionAction) DataAction
    protected void invokeCustomMethod(DataActionContext ctx)
    {
        super.invokeCustomMethod(ctx);
        HttpSession session = ctx.getHttpServletRequest().getSession();
        Thread nominalSearch = (Thread) session.getAttribute("nominalSearch");
        if (nominalSearch == null)
        {
            synchronized (this)
            {
                // create new instance of the thread
                nominalSearch = new ns(ctx);
            } // end of synchronized wrapper
            session.setAttribute("nominalSearch", nominalSearch);
            session.setAttribute("action", "nominalSearch");
            nominalSearch.start();
            System.err.println("started thread, calling loading page");
            ctx.setActionForward("loading.jsp");
        }
        else if (nominalSearch.isAlive())
        {
            System.err.println("trying to call loading page");
            ctx.setActionForward("loading.jsp");
        }
        else
        {
            System.err.println("trying to call results page");
            ctx.setActionForward("success");
        }
    }
    Created another class called ns.java:
    package view;
    import oracle.adf.controller.struts.actions.DataActionContext;
    import oracle.adf.model.binding.DCIteratorBinding;
    import oracle.adf.model.generic.DCRowSetIteratorImpl;
    public class ns extends Thread
    {
        private DataActionContext ctx;
        public ns(DataActionContext ctx)
        {
            this.ctx = ctx;
        }
        public void run()
        {
            System.err.println("START");
            DCIteratorBinding b = ctx.getBindingContainer().findIteratorBinding("currentNominalCollectionIterator");
            ((DCRowSetIteratorImpl) b.getRowSetIterator()).rebuildIteratorUpto(-1);
            //b.executeQuery();
            System.err.println("END");
        }
    }
    and added a loading.jsp page that calls a new dataAction called processing every second. The processing dataAction has the following code within it:
    package view;
    import javax.servlet.http.HttpSession;
    import oracle.adf.controller.struts.actions.DataForwardAction;
    import oracle.adf.controller.struts.actions.DataActionContext;
    public class ProcessingAction extends DataForwardAction
    {
        protected void invokeCustomMethod(DataActionContext actionContext)
        {
            // TODO: Override this oracle.adf.controller.struts.actions.DataAction method
            super.invokeCustomMethod(actionContext);
            HttpSession session = actionContext.getHttpServletRequest().getSession();
            String action = (String) session.getAttribute("action");
            if (action.equalsIgnoreCase("nominalSearch"))
            {
                actionContext.setActionForward("refreshCollection.do");
            }
        }
    }
    I'd appreciate any help or guidance that you may have on this as I really need to implement a generic loading page that can be called by a number of actions within my application as soon as possible.
    Thanks in advance for your help
    David.

  • How to execute a process on a certain server

    Hi all, we have a scheduled job that must be executed on a certain server. This job triggers some processes and the system does the load balancing, but we want these processes to run on the same server as the job.
    Is it possible to do this? How can we force the system to execute these processes on the same server as the job?
    thanks.

    Juan,
    If you are speaking of a scheduled job - like one you can find in SM37, then you can set a specific server as the target server - like the CI.
    First, check the following:
    SM61 - make sure you have operation modes set up for the servers. The field "Exec. Target" in the SM36/SM37 job definition reads its values from what you have set up here. The servers in the landscape will show up by default, but you can build a group to include/exclude certain servers as you see fit.
    Next, in either SM36 when building a job, or in SM37 to change an existing job, open the job in change mode. On the first screen, you'll see a field labeled "Exec. Target" - use the match-code values (F4 help) to select the target server you want. Save the job, and you are all set.
    Hope this helps.
    -Tim

  • Error when executing external workflow process

    OWB 9.2 with server on Windows NT.
    I can successfully execute a mapping workflow process from the deployment manager, but I get an error when executing a simple external process:
    command: move
    parameters: ?c:\\data\\owbtest\\src.txt?c:\\data\\owbtest\\trg.txt
    Resulting output:
    Create Process: move c:\data\owbtest\src.txt c:\data\owbtest\trg.txt error=2
    File c:\data\owbtest\src.txt exists.
    My questions:
    1. What am I doing wrong?
    2. What does error=2 mean? Is this a Windows error?
    3. Do I have to configure some location for the process? What does 'use default location' mean for the configuration property Working Location? Is a host-logon performed before the host-command is executed?
    4. Is there logging available for an external process? In the workflow tables or views?
    5. I can't find much documentation for these questions. Is there more documentation than the OWB user guide and the OWF guide?
    Jaap.

    It seems that commands that are not a standalone executable in some directory, but are built into the Windows command interpreter (like 'move'), need to be started as a parameter of the cmd command. I did this and now it works fine.
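    (For reference: error=2 is the Windows "file not found" code, which here means that no executable named 'move' could be found.) Based on the parameter syntax shown in the question, the adjusted external process activity might look roughly like this; the exact command/parameter conventions should be checked against the OWB/OWF documentation:
      command: cmd
      parameters: ?/c?move?c:\\data\\owbtest\\src.txt?c:\\data\\owbtest\\trg.txt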

  • Error processing request in sax parser: Error when executing statement...

    Hello,
    I want to INSERT data from an R/3 system into a DB2 database on AS400 via the JDBC adapter. The interfaces from R/3 are OK, but I have some problems using the JDBC adapter with DB2 systems. The message in the communication channel is:
    " Error processing request in sax parser: Error when executing statement for table/stored proc. 'SPE106TST' (structure 'STATEMENT'): java.sql.SQLException: SPE106TST de SADMT1 no válido para la operación." (roughly: "SPE106TST of SADMT1 not valid for the operation")
    In SXMB_MONI -> Request Message Mapping the payload is this:
    The connection to the database is fine; a sender adapter with a SELECT * works perfectly.
    Can anyone please help me solve this problem? I'm lost.
    Best regards,

    Hi Nicola,
    This error occurs when the receiver side structure is incorrect.
    Your structure seems to be correct.
    Please use lower case for action, access and table.
    Please check whether the field names are exactly the same as in the actual database table sadmt1.SPE106TST.
    Check whether your user has permission to write to the table.
    You can try an alternative structure without using the table tag:
    <ns0:MT_XMLSQL_SPEC xmlns:ns0="urn:damm.com/pi/EmployeeMasterData">
    <STATEMENT>
    <sadmt1.SPE106TST action="INSERT">
    <access>
    <CODEMP>D</CODEMP>
    <CODPRO>00202339</CODPRO>
    <NOMPRO>ROSIQUE PERALSGENIS</NOMPRO>
    <DIRPRO>GIRONA</DIRPRO>
    <POBPRO>S. VICENS HORTS</POBPRO>
    <RUTA>0</RUTA>
    <ORDEN>0</ORDEN>
    <NOMINA>S</NOMINA>
    </access>
    </sadmt1.SPE106TST>
    </STATEMENT>
    </ns0:MT_XMLSQL_SPEC>
    Hope your problem gets solved.
    -Shamly

  • Process flow fails when executing a mapping with an error

    hi all,
    When we execute a mapping through OWB directly, it works fine.
    But when the same mapping is executed through a process flow, it fails with the following error:
    *"Set based mode not supported ORA-06512"*
    The operating mode is "set based fail over to row based".
    In the mapping we are doing some transformations and putting the data into a flat file.
    Please help in this regard.
    Regards
    ashok

    Hi,
    Change the process flow configuration property OPERATING_MODE for this mapping to ROW_BASED or ROW_BASED_TARGET_ONLY.
    Hope this helps!

  • Mail crashes my web server - too many processes

    Just a bit of information for anyone out there who has found their web server having trouble during access by Leopard Mail.
    I have 1 computer using Leopard. The Mail app is setup with 5 accounts (i.e. orders, service, accounting, etc.). The accounts are all IMAP. Our website is hosted at Bluehost.com.
    Mail starts so many processes accessing these 5 accounts simultaneously (instead of serially, one after another) that it maxes out my web server processes and renders anything that needs additional processes, such as FrontPage extensions, PHP, etc., unavailable. I'm not sure whether the problem is the sheer number of processes or improper closure of processes, but I can tell you this: Mail with 1 user breaks my server, and quitting Leopard Mail closes all of the processes and restores the functionality of my web server.
    Apple... could you put in an option to allow serial access to the email accounts, or really look at these processes, as this is a big problem? Thanks.

    Same problem here. You may want to check out the thread "My ISP has blocked my IP address because of Leopard IMAP configuration" under http://discussions.apple.com/thread.jspa?messageID=5780594&#5780594

  • Barcode error when executing it in Web Server, using BAR_CODE_39

    Hello, the following error appears when executing the barcode sample on the web server:
    REP-108: File '/tmp/srw088834566' not found
    The CLASSPATH and DISPLAY variables were changed.
    The odd thing is that it works when I use BAR_CODE_128.
    With BAR_CODE_39 it encounters error REP-108.
    Any suggestions?
    Karina Gaona V.

    You may check this to troubleshoot the front end:
    Troubleshoot the SAP NetWeaver 2004s BI Frontend Installation

  • Phone hog links give me the error message "an error occurred on the server when processing the URL"

    Whenever I click on the links sent from phone hog.com to earn free minutes, I get the following error message: "An error occurred on the server when processing the URL. Please contact the system administrator." I have made several contacts with phone hog help and they claim it is the settings on my computer. However, I contacted my internet provider, TDS Telecom, and they indicated that this is a phone hog issue. I do not get this type of error with any other links I click on.

    Hello msFit,
    it's well known that in cases like the one you describe I'm not a fan of detailed troubleshooting. Being independent of all these things is one of the reasons why I prefer an external FTP program. The difficulties you have to fight with confirm me in this opinion, not least because we should always look for experts rather than a "jack of all trades".
    To manage several websites, to upload my files, and sometimes for the opposite direction - a necessary download from my server - or to use a site-wide synch, I'm using FileZilla. It simply looks easier for me to keep track of all operations precisely and to generate or mirror the desired tree structure easily.
    Above all, FileZilla has a feature (translated from my German FileZilla) called "compare file list". Here it's possible to use file size or modification time as a criterion. There is also an option to "hide identical files", so that only the files you want to edit remain visible.
    And even if it means you have to install a new program, I am convinced that there is an advantage. Here is the link to get it, and where you can read information about how it works:
    http://filezilla-project.org/ and http://wiki.filezilla-project.org/Tutorial#Using_the_site_manager
    Mac: Mac OS X (Use: Show additional download options)
    http://filezilla-project.org/download.php
    Of course, you also need all the access data to reach your server and for MIME issues, you should contact your web host/provider.
    Good luck!
    Hans-Günter
    P.S.
    Since I use two screens, the whole thing became even more comfortable.
