Design considerations with clustering

Anybody have any docs (besides the WebLogic ones) or comments on things to
          consider when designing an app/system (JSP/servlet/EJB) that will be deployed
          in a cluster?
          Thanks.
          Tinou Bao
          http://www.tinou.com
          mailto:[email protected]
          

I too use stateful session beans when appropriate. My suggestion is to
          avoid clustering them (WL added the feature in 6.0). I don't doubt that the
          replication works; however, a web-based architecture with HTTP session
          replication is typically sufficient and already expensive enough. Designing
          an application that requires EJB replication introduces a lot of variables
          that I believe are best avoided.
          Peace,
          Cameron Purdy
          Tangosol, Inc.
          http://www.tangosol.com
          +1.617.623.5782
          WebLogic Consulting Available
          "giri" <[email protected]> wrote in message
          news:[email protected]...
          > Cameron,
> Is there any specific reason for not using stateful session EJBs in a cluster?
          > We are using them to store search results in our application, and the data is
          > not actually sensitive, so it need not be replicated. If we lose that
          > information we can search again.
          >
          > thanks
          > --giri
          >
          > Cameron Purdy wrote:
          >
> > 1) The only "global variable" in the cluster is the database -- static
          > > doesn't cut it.
          > > 2) Don't load up your HTTP sessions with non-transient data, because it
          > > will get carted around your network and kill performance.
          > > 3) If you change something in the session that needs to be replicated,
          > > you must actually call setAttribute on the session or it will not get
          > > replicated.
          > > 4) Simple tasks like time-based or periodic application events become a
          > > real pain -- how do you get exactly one server -- no more, no less -- in
          > > the cluster to run something, and have it guaranteed to be run? Avoid
          > > these if possible.
          > > 5) Don't cluster stateful session EJBs unless you have a really good
          > > reason to.
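
Point 3 above is the classic gotcha, so a concrete sketch may help. This is a minimal servlet example (the Cart and CartServlet names are invented for illustration, not taken from the posts): mutating an object that already lives in the session is invisible to the container, so you must call setAttribute again to mark the attribute dirty for replication.

    import java.io.IOException;
    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // Hypothetical session attribute; it must be Serializable so the
    // container can replicate it to other cluster members.
    class Cart implements Serializable {
        private final List<String> items = new ArrayList<String>();
        void addItem(String sku) { items.add(sku); }
    }

    public class CartServlet extends HttpServlet {
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            HttpSession session = req.getSession();
            Cart cart = (Cart) session.getAttribute("cart");
            if (cart == null) {
                cart = new Cart();
            }
            cart.addItem(req.getParameter("sku"));

            // The in-place mutation above is NOT enough: the container only
            // replicates on setAttribute, so re-set the attribute explicitly.
            session.setAttribute("cart", cart);
        }
    }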
          

Similar Messages

  • Design considerations for mobile version of website

My company has just implemented a new version of our corporate website using Oracle Portal and ADF. However, we do not have comprehensive mobile support, and it is required. From my research I've learned we should be using ADF Mobile. However, the resources I have found have been geared more toward developers, and I work as a Business Analyst. Can you point me to online resources (white papers, tutorials, etc.) that deal with requirements and design considerations for porting over to a mobile version?
    Thanks in advance.

My apologies for my lateness in checking this thread.
    Here is an article I wrote that might help. Most of the paper is geared toward developers, but the first few sections talk about best practices for mobile browser apps in general, not just for the iPhone. http://www.oracle.com/technology/pub/articles/huang-iphone.html
    You can also check out this link: http://www.oracle.com/technology/tech/wireless/adf_mobile.html.
    As for design considerations, a few rules of thumb:
    - First, obviously use ADF Mobile and Trinidad components. We put a lot of effort into adding rendering support for different mobile devices.
    - Next, figure out what devices you want to support. Browsers found in smartphones vary greatly, and in consumer/feature phones the support is even less consistent. In general you should be able to create one app/set of screens for all mobile devices, but you should plan on having the ability to test it out on different devices.
    - Determine what information is really needed by the mobile user. Not all information available in the desktop application may be applicable for mobile users.
    - Design your mobile screens based on a few key principles:
    - Display data as the user needs it, instead of trying to display everything. For example, instead of displaying master-detail data on the same screen, break it out into 2 screens. The master data may be a list, and the user would click on a particular piece of data to look at the details of that master record.
    - Provide navigation buttons on each screen, and ensure they are easy to get to on a page. For example, using the iPhone paradigm, there is a navigation bar at the top of the page where you can go between views.
    - Place command buttons at locations where they are easy for the user to get to. For example, if you need to allow the user to modify a long list of fields, you may want to place a save button at both the top and the bottom of the screen, so the user can easily get to it without having to scroll around too much.
    - Use style sheets to achieve a mobile-platform-optimized UI. For example, if you intend to support touch-screen devices (iPhone, BB Storm, etc.), then style your application so buttons, command links, etc., are big enough to be easy to tap. For non-touch-screen devices it's OK to compress the UI, but ensure the user can easily flow between controls to get to the functionality they need, for example by using a trackball to scroll to a button.
    Thanks,
    Joe Huang

  • Design issue with the multiprovider

Design issue with the multiprovider:
    I have the following problem when using my multiprovider.
    The data flow is like this: I have the InfoObjects IobjectA, IobjectB, IobjectC in my cube (the source for this data is s-systemA).
    And from another s-system I am also loading the master data for IobjectA.
    Now I have created the multiprovider based on the cube and IobjectA.
    However, surprisingly, the join is not working correctly in the multiprovider.
    Scenario:
    Record from the Cube.
    IObjectA= 1AAA
    IObjectB = 2BBB
    IObjectC = 3CCC
    Records from IobjectA =1AAA.
    I expect the record should be like this :
    IObjectA : IObjectB: IObjectC
    1AAA       :2BBB       :3CCC
    However, I am getting the record like this:
    IObjectA : IObjectB: IObjectC
    1AAA       :2BBB       :3CCC
    1AAA         : #             :#
In the Identification section I have selected both the entries for IobjectA, but I am still getting this result.
    My BW Version is 3.0B and the SP is 31.
    Thanks in advance for your suggestion.

Maybe I was not clear enough in my first explanation. Let me try again to explain my scenario:
    My expectation from the multiprovider is:
    IObjectA
    1AAA
    (From InfoObject)
    Union
    IObjectA     IObjectB     IObjectC
    1AAA     2BBB     3CCC
    (From Cube)
    The record in the multiprovider should be :
    IObjectA     IObjectB     IObjectC
    1AAA     2BBB     3CCC
Because this is what a union gives, and the definition of the multiprovider also says the same thing:
    http://help.sap.com/saphelp_bw30b/helpdata/EN/ad/6b023b6069d22ee10000000a11402f/frameset.htm
    Do you still think this is the intended behaviour of the multiprovider? If that is the case, what would be the purpose of having an InfoObject in the multiprovider?
    Thank you very much in advance for your responses.
    Best Regards.,
    Praveen.

  • I have design standard with creative cloud, I have installed this on 2 computers, my office and home which I was told was allowed, I have just tried to open an indesign file from the office at home and and error message said that this was created with a n

I have Design Standard with Creative Cloud, installed on 2 computers, my office and home, which I was told was allowed. I have just tried to open an InDesign file from the office at home, and an error message said that it was created with a newer version? They are the same versions and both are up to date. I was asked this morning to put in my Adobe ID email and password to connect to CC, which I have never been asked to do before. Can anyone help?

What's your home version (click Help > About), and was that just a warning, so you were still able to open the file?

  • Error while creating table with clusters

    Hi
    I tried the following
CREATE CLUSTER emp_dept (deptno NUMBER(3));
    The cluster is created. Then I tried to create a table on the above cluster:
    create table emp10 (ename char(5), deptno number(2)) cluster emp_dept(deptno);
    The error is:
    ORA-01753: column definition incompatible with clustered column definition
    Could you please help me with this?

Your cluster is based on a NUMBER(3) data type, while the emp10 table declares its deptno column as NUMBER(2). The clustered column's data type must match the cluster key exactly, so either declare deptno as NUMBER(3) in emp10 or recreate the cluster with NUMBER(2).
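
    A minimal corrected sketch, assuming the NUMBER(3) cluster is kept as-is:

        CREATE TABLE emp10 (
            ename  CHAR(5),
            deptno NUMBER(3)  -- must match the cluster key's data type exactly
        )
        CLUSTER emp_dept (deptno);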

When a table with a clustered columnstore index is partitioned, performance degrades if data is located in multiple partitions

    Hello,
Below I provide complete code to reproduce the behavior I am observing.  You could run it in tempdb or any other database; it is not important.  The test query provided at the top of the script is pretty silly, but I have observed the same
    performance degradation with about a dozen queries of varying complexity, so this is just the simplest one I am using as an example here. Note that I also included approximate run times in the script comments (these are obviously based on what I
    observed on my machine).  Here are the steps, with numbers corresponding to the numbers in the script:
    1. Run script from #1 to #7.  This will create the two test tables, populate them with records (40 mln. and 10 mln.) and build regular clustered indexes.
    2. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Main'. Scan count 5, logical reads 151435, physical reads 0, read-ahead reads 4, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Txns'. Scan count 5, logical reads 74155, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 5514 ms, 
    elapsed time = 1389 ms.
    3. Run script from #8 to #9. This will replace regular clustered indexes with columnstore clustered indexes.
    4. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54850, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 828 ms, 
    elapsed time = 392 ms.
As you can see, the query is clearly faster.  Yay for columnstore indexes! But let's continue.
5. Run script from #10 to #12 (note that this might take some time to execute).  This will move about 80% of the data in both tables to a different partition.  You should be able to see that the data has been moved when running Step #11.
    6. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54817, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 8172 ms, 
    elapsed time = 3119 ms.
    And now look, the I/O stats look the same as before, but the performance is the slowest of all our tries!
I am not going to paste execution plans or the detailed properties for each of the operators here.  They show up as expected -- columnstore index scan, parallel/partitioned = true, and both the estimated and actual number of rows is less than during the second
    run (when all of the data resided in the same partition).
    So the question is: why is it slower?
    Thank you for any help!
Here is the code to reproduce this:
    --==> Test Query - begin --<===
    DBCC DROPCLEANBUFFERS
    DBCC FREEPROCCACHE
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    SELECT COUNT(1)
    FROM Txns AS z WITH(NOLOCK)
    LEFT JOIN Main AS mmm WITH(NOLOCK) ON mmm.ColBatchID = 70 AND z.TxnID = mmm.TxnID AND mmm.RecordStatus = 1
    WHERE z.RecordStatus = 1
    --==> Test Query - end --<===
    --===========================================================
    --1. Clean-up
    IF OBJECT_ID('Txns') IS NOT NULL DROP TABLE Txns
    IF OBJECT_ID('Main') IS NOT NULL DROP TABLE Main
    IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'PS_Scheme') DROP PARTITION SCHEME PS_Scheme
    IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'PF_Func') DROP PARTITION FUNCTION PF_Func
--2. Create partition function
    CREATE PARTITION FUNCTION PF_Func(tinyint) AS RANGE LEFT FOR VALUES (1, 2, 3)
    --3. Partition scheme
    CREATE PARTITION SCHEME PS_Scheme AS PARTITION PF_Func ALL TO ([PRIMARY])
    --4. Create Main table
    CREATE TABLE dbo.Main(
    SetID int NOT NULL,
    SubSetID int NOT NULL,
    TxnID int NOT NULL,
    ColBatchID int NOT NULL,
    ColMadeId int NOT NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --5. Create Txns table
    CREATE TABLE dbo.Txns(
    TxnID int IDENTITY(1,1) NOT NULL,
    GroupID int NULL,
    SiteID int NULL,
    Period datetime NULL,
    Amount money NULL,
    CreateDate datetime NULL,
    Descr varchar(50) NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --6. Populate data (credit to Jeff Moden: http://www.sqlservercentral.com/articles/Data+Generation/87901/)
    -- 40 mln. rows - approx. 4 min
    --6.1 Populate Main table
    DECLARE @NumberOfRows INT = 40000000
    INSERT INTO Main (
    SetID,
    SubSetID,
    TxnID,
    ColBatchID,
    ColMadeID,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    SetID = ABS(CHECKSUM(NEWID())) % 500 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SubSetID = ABS(CHECKSUM(NEWID())) % 3 + 1,
    TxnID = ABS(CHECKSUM(NEWID())) % 1000000 + 1,
    ColBatchId = ABS(CHECKSUM(NEWID())) % 100 + 1,
    ColMadeID = ABS(CHECKSUM(NEWID())) % 500000 + 1,
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --6.2 Populate Txns table
    -- 10 mln. rows - approx. 1 min
    SET @NumberOfRows = 10000000
    INSERT INTO Txns (
    GroupID,
    SiteID,
    Period,
    Amount,
    CreateDate,
    Descr,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    GroupID = ABS(CHECKSUM(NEWID())) % 5 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SiteID = ABS(CHECKSUM(NEWID())) % 56 + 1,
    Period = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'), -- DATEADD(dd,ABS(CHECKSUM(NEWID())) % @Days, @StartDate)
    Amount = CAST(RAND(CHECKSUM(NEWID())) * 250000 + 1 AS MONEY),
    CreateDate = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'),
    Descr = REPLICATE(CHAR(65 + ABS(CHECKSUM(NEWID())) % 26), ABS(CHECKSUM(NEWID())) % 20),
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --7. Add PK's
    -- 1 min
    ALTER TABLE Txns ADD CONSTRAINT PK_Txns PRIMARY KEY CLUSTERED (RecordStatus ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED INDEX CDX_Main ON Main(RecordStatus ASC, SetID ASC, SubSetId ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Replace regular indexes with clustered columnstore indexes
    --===========================================================
    --8. Drop existing indexes
    ALTER TABLE Txns DROP CONSTRAINT PK_Txns
    DROP INDEX Main.CDX_Main
    --9. Create clustered columnstore indexes (on partition scheme!)
    -- 1 min
    CREATE CLUSTERED COLUMNSTORE INDEX PK_Txns ON Txns ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED COLUMNSTORE INDEX CDX_Main ON Main ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
-- Move about 80% of the data into a different partition
    --===========================================================
    --10. Update "RecordStatus", so that data is moved to a different partition
    -- 14 min (32002557 row(s) affected)
    UPDATE Main
    SET RecordStatus = 2
    WHERE TxnID < 800000 -- range of values is from 1 to 1 mln.
    -- 4.5 min (7999999 row(s) affected)
    UPDATE Txns
    SET RecordStatus = 2
    WHERE TxnID < 8000000 -- range of values is from 1 to 10 mln.
    --11. Check data distribution
    SELECT
    OBJECT_NAME(SI.object_id) AS PartitionedTable
    , DS.name AS PartitionScheme
    , SI.name AS IdxName
    , SI.index_id
    , SP.partition_number
    , SP.rows
    FROM sys.indexes AS SI WITH (NOLOCK)
    JOIN sys.data_spaces AS DS WITH (NOLOCK)
    ON DS.data_space_id = SI.data_space_id
    JOIN sys.partitions AS SP WITH (NOLOCK)
    ON SP.object_id = SI.object_id
    AND SP.index_id = SI.index_id
    WHERE DS.type = 'PS'
    AND OBJECT_NAME(SI.object_id) IN ('Main', 'Txns')
    ORDER BY 1, 2, 3, 4, 5;
    PartitionedTable PartitionScheme IdxName index_id partition_number rows
    Main PS_Scheme CDX_Main 1 1 7997443
    Main PS_Scheme CDX_Main 1 2 32002557
    Main PS_Scheme CDX_Main 1 3 0
    Main PS_Scheme CDX_Main 1 4 0
    Txns PS_Scheme PK_Txns 1 1 2000001
    Txns PS_Scheme PK_Txns 1 2 7999999
    Txns PS_Scheme PK_Txns 1 3 0
    Txns PS_Scheme PK_Txns 1 4 0
    --12. Update statistics
    EXEC sys.sp_updatestats
    --==> Run test Query --<===

    Hello Michael,
    I just simulated the situation and got the same results as in your description. However, I did one more test - I rebuilt the two columnstore indexes after the update (and test run). I got the following details:
    Table 'Txns'. Scan count 8, logical reads 12922, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 8, logical reads 57042, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 251 ms, elapsed time = 128 ms.
As an explanation of the behavior: because an UPDATE statement against a clustered columnstore index is executed as a DELETE plus an INSERT, you ended up with the original row groups having almost all of their rows flagged as deleted, plus almost the same number of new row groups holding the updated rows (coming from the update). I suppose scanning the delete bitmap, or something related to that "fragmentation", caused the additional slowness on your end.
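
    One way to do the rebuild described above (index and table names taken from the script earlier in the thread); rebuilding compacts the row groups and physically removes the rows flagged in the delete bitmap:

        ALTER INDEX CDX_Main ON dbo.Main REBUILD;
        ALTER INDEX PK_Txns ON dbo.Txns REBUILD;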
    Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer

  • Overwrite mapping in design repository with the last one deployed

    Hi
We have a single design environment with 2 runtime environments (Dev, Prod). When I log in to the Dev Control Center, I see there is a mapping "changed". It was changed by an employee who doesn't work here anymore, and I have no clue what was changed.
    There is no specific request or obvious reason for that change, and since it's a huge and complex mapping, I would like to overwrite (or "rollback") the mapping in the design environment with the last one deployed in Production.
    But I don't know how, or if it's possible! At least if I could compare the 2 versions, I could tell whether the change is good or not.
    To me, it's a showstopper for incoming modifications.
    There is no snapshot, and I have been using OWB 10gR2 for only a month now, so all of this is new to me.
    I really need help on this one!
    Thanks!

One onerous way to do this is to take the MDL exports of the mapping from both environments and try opening them in an XML/HTML text editor (Notepad++, for example).
    If they look like they are encrypted, try renaming the .mdl file to .zip (I can do this in 11g, and what it gives me is a .xml file and a .mdx file, which can be viewed in an XML editor).
    You can then possibly do a text comparison to find if anything is obviously different.
    Another good option is something our friend Oleg developed.
    http://owbeg.blogspot.co.uk/2012/05/release-005-of-mapreconstruct-script.html
    This script extracts the mapping as an OMB script. You can then compare both mapping tcl outputs.

  • Producer/Consumer Design Pattern with Classes

I'm starting a new project which involves acquiring data from various pieces of equipment using a GPIB port.  I thought this would be a good time to start using classes.  I created a GPIB class which contains member data of: Address, Open State, Error; with member VIs such as Set Address, Get Address, Open, Close... general actions that all GPIB devices need to do.  I then created a child class for a specific instrument (Agilent N1912 Power Meter for this example) which inherits from the GPIB class but also adds member data such as Channel A power and Channel B power and the associated member functions to obtain the data from the hardware.  This went fine, and I created a test VI for verification utilizing a typical Event Structure architecture.
However, in other applications (without classes) I typically use the Producer/Consumer design pattern with Event Structure so that the main loop is not delayed by any hardware interaction.  My queue data is a cluster of an "action" enum and a variant to pass data.  Is it OK to use this pattern with classes?  I created a VI and it works fine; attached is a PNG (of 1 case) of it.
    Are there any problems doing it this way?
    Jason

    JTerosky wrote:
I'm starting a new project which involves acquiring data from various pieces of equipment using a GPIB port.  I thought this would be a good time to start using classes.  I created a GPIB class which contains member data of: Address, Open State, Error; with member VIs such as Set Address, Get Address, Open, Close... general actions that all GPIB devices need to do.  I then created a child class for a specific instrument (Agilent N1912 Power Meter for this example) which inherits from the GPIB class but also adds member data such as Channel A power and Channel B power and the associated member functions to obtain the data from the hardware.  This went fine, and I created a test VI for verification utilizing a typical Event Structure architecture.
    However, in other applications (without classes) I typically use the Producer/Consumer design pattern with Event Structure so that the main loop is not delayed by any hardware interaction.  My queue data is a cluster of an "action" enum and a variant to pass data.  Is it OK to use this pattern with classes?  I created a VI and it works fine; attached is a PNG (of 1 case) of it.
    Are there any problems doing it this way?
Including the error cluster as part of the private data is something I have never seen done and... well, I'll have to think about that one.
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

Migrate the SharePoint 2007 Site containing SharePoint Designer Workflows with InfoPath forms to SharePoint 2013

    Hi,
We have a requirement to migrate a SharePoint 2007 site containing SharePoint Designer workflows with InfoPath forms to SharePoint 2013.
    Can somebody please guide us on the best approach?
    Thanks in advance.
    Regards,
    Vijay

Use STSADM backup and restore to migrate the SharePoint 2007 site containing SharePoint Designer workflows with InfoPath forms to SharePoint 2013.
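
    For reference, the stsadm operations that this answer points at look roughly like the sketch below. The URLs and file names are placeholders, and treat this as a sketch of the command shape only: a 2007 site-collection backup cannot simply be restored onto 2013, since the supported route from 2007 to 2013 normally passes through SharePoint 2010 first.

        stsadm -o backup -url http://server2007/sites/yoursite -filename yoursite.bak
        stsadm -o restore -url http://server2013/sites/yoursite -filename yoursite.bak -overwrite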

  • Livecycle design 8 with PDF forms

I need to create a simple PDF form, to be hosted on an internal web site, which can be filled out by the viewer and submitted by email to the designated person with all the info intact in the same PDF form. The users will only have Adobe Reader. I already have the form in Excel 2003, and I am running Windows XP Professional 2002.
    Thank you, 

Reader does not allow a local save of the form and data by default. To be able to add the attachment to an email message, a local save must occur. You can Reader Extend your form to allow for this. Open the form in Acrobat Pro. Under the Advanced menu choose "Extend Features in Adobe Reader". Follow the wizard and save the resulting PDF under a different name... I like to put RE in the name so I know it is Reader Extended. Try the new file.
    paul

  • Design studio with BPC

    Hello Experts,
How can I connect Design Studio with BPC cubes? I am using DS version 1.3.
    I need to create a Design Studio application with BPC data.
    I have gone through some documents but couldn't find a way to do this.
    Regards,
    LKumar

    Hi Kumar,
I have implemented a Design Studio application on top of a BPC model by enabling "use a source of data" in the BPC Admin website. This creates a multiprovider (named /CPMB/xxx), and we can then create BEx queries on top of it, with some limitations (no currency conversion, for example).
    Design Studio manages this query as a standard BEx query, for displaying data only.
    Regards,
    Thomas
PS: Please pay attention with multiple systems, as BPC objects (attributes) can have different names when transported. All references to BPC objects in Design Studio would then be incorrect.

  • SharePoint Designer 2013 with SOAP DataSource and Windows Authentication

    Hello
    What do I need to do to create a SOAP Data Source in SharePoint Designer 2013 with Windows Authentication?
    I am trying to display a list from another subsite in my Web Part Page using a DataView web part.
    My environment is set up to use Claims with Kerberos.
    Is there a guide available that talks about configuring this or troubleshooting this? What should I check?
    Thanks for any suggestions.
    Yoshi

    Hi,
    According to your post, my understanding is that you wanted to create a SOAP Data Source in SharePoint Designer 2013.
Please refer to the related official article; although it is about SharePoint Designer 2010, it still works for SharePoint Designer 2013.
    http://office.microsoft.com/en-in/sharepoint-designer-help/add-a-soap-service-as-a-data-source-HA010355752.aspx
    More information:
    http://social.technet.microsoft.com/Forums/en-US/20e34a68-fa78-4450-9744-45f9e3ff26b5/sharepoint-designer-soap-and-rest-datasource-failure?forum=sharepointcustomizationprevious
    http://sharepointdiva.wordpress.com/2012/03/19/create-cross-site-data-source/
    Best Regards,
Linda Li
    TechNet Community Support

  • Upgrade from CS 5.5 design premium to CS 6 design standard with purchased DVD and valid serial #s

Why can't I upgrade from CS 5.5 Design Premium to CS 6 Design Standard with a purchased DVD and valid serial numbers?

Kkofron, please make sure the CS5.5 serial number you are using can also be registered under your account at http://www.adobe.com/.  You can find additional details on how to register your Creative Suite 5.5 serial number at Find your serial number quickly - http://helpx.adobe.com/x-productkb/global/find-serial-number.html.

  • Is it possible to turn off CSS and design text with HTML only?

I'm trying to design text with HTML only, i.e., font, color, size. The only method I'm aware of is to highlight the text and then go to Insert > HTML > Text Objects. That method is tedious and time consuming.
    I'm not looking for arguments or reasons as to why I should use CSS; I'm simply looking for a way to disable CSS and design with HTML. I'm using Dreamweaver CS5.
    Thank you,
    Paul

    Murphy,
Thank you for your response. I checked the preferences as you advised and could not find any option to turn off CSS. I don't understand why there doesn't seem to be a straightforward solution to this; it seems so simple. The reason is that the code will be input into an eBay listing, and therefore CSS code is not supported.
    Any other input for this seemingly simple yet complex problem is welcome and appreciated.
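
    For what it's worth, the pre-CSS way to style text directly in the markup is HTML's deprecated presentational tags and attributes, which eBay-era listings commonly accepted. A minimal sketch (the face, color, and size values are placeholders):

        <p align="center">
          <font face="Arial" color="#cc0000" size="4"><b>Sale ends Friday</b></font>
        </p>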

  • What is the best design tablet with stylus to have an easy interface with Adobe Photoshop?

What is the best design tablet with stylus for use with Adobe Photoshop from this holiday season's offerings, 2013?
    I am trying to find a tablet with a good stylus to work with Adobe design products, primarily Photoshop. I would like one that works with layers in Photoshop.
    The folks at Wacom don't even answer the phone, just a recorded message saying to go to the web site with questions.  Not a good sign for a company. So what is a good design tablet with a pressure-sensitive stylus? Will a Wacom Cintiq tablet interface well with an Apple iMac on OS X 10.8?
    I love my Samsung Note 3, but it will not easily transfer images to an Apple iMac on 10.8.
    Please help me find tablets with a good Adobe design interface. Just tell me which way to jump. It is easier to leave Apple for PC or Android than to abandon Adobe knowledge. The products have to work together.
    Does the Wacom Cintiq not embrace an easy interface with Apple's latest iMac software, OS X 10.8? Wacom seems to be championing Windows 8 as a companion to their tablet interface.
    Can an iPad deliver good layered designs using Adobe design software and a stylus?
    What should I buy for an Adobe design tablet with a pressure-sensitive stylus for this holiday season?
    Should I wait until next year?
    Will the tablet work with Photoshop layers?
    This link seemed ominous:
    http://forums.adobe.com/message/4950467

subhash007 wrote: It's not an 802.3ad link aggregated interface. On the switch side, the ports will be configured as normal access ports, and the bonding config will be done on the server side.
To be honest, I don't understand how the Linux bonding mode can work without anything configured at the other end.
    My understanding of 'bonding' comes from Multilink PPP (MLP) where the data stream is chopped up and split across two (or more) circuits. At the other end, a similar MLP-enabled device reforms the data stream from the multiple circuits, maintaining packet order. But this requires MLP-enabled 'bonding' devices at each end.
    Perhaps you could help me better understand the Linux bonding...
subhash007 wrote: If any single homed server is connected to Switch 2, what will be the traffic path for its data packets?
    Switch 2 ------------------> Switch 1 ----------------------> Active firewall
    OR
    Switch 2 ------------------> Passive Firewall -----------> Active Firewall
    If the firewalls operate in the same fashion as Cisco ASAs, then the inter-firewall link doesn't carry traffic. It's for failover detection and HTTP replication only. But like I said, I'm not familiar with this vendor's products.
subhash007 wrote: Also, will there be any change in the traffic path if the trunk between Switch 1 & Switch 2 is converted to an L3 routed interface? Since there is no VRRP, I can convert the trunk to L3, right?
    Same as above.
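
    On the bonding question above: not every Linux bonding mode needs switch cooperation. Modes like active-backup keep only one slave transmitting at a time, so the switch just sees an ordinary host port; it is 802.3ad (LACP) that requires matching configuration on the switch. A minimal sketch of a switch-independent setup (the file path and parameter values vary by distribution, so treat this as an assumption-laden example):

        # /etc/modprobe.d/bonding.conf -- load the bonding driver in
        # active-backup mode with link monitoring every 100 ms.
        # No switch-side configuration is required for this mode.
        options bonding mode=active-backup miimon=100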
