Understanding dimensions and issues with Bridge

So I'm super new to Illustrator and I'm trying to wrap my head around some basic things, and I feel like an idiot because I'm just not getting it. What I'm actually trying to do is create a properly sized file to upload to Zazzle, if you're familiar with the site. I downloaded one of their guide files in the 5x7 size to play around with, but they don't have one for the posters, which is what I want to create. My problem: their guide file shows up in Adobe Bridge with dimensions of 5x7 inches and 1400x1000 pixels at 200 PPI in the RGB color mode, but a similar file that I created from scratch, with inches as the unit, shows up in Bridge with no value for the inch dimensions, the PPI, or the color mode, and with a pixel value of 504x360. Why is there a difference? How the heck do I get 200 PPI? Why are all these values missing from the file properties in Bridge, and why are the pixels so small? I've tried it the other way around and created a file using pixels as the units, entering 1400x1000 pixels, but once it's created, if I switch my rulers to inches they show the size as almost 20x14 inches - what the? Either way, any document I create is missing those file property values in Bridge.
Here are my settings when creating a file (I choose the Basic RGB profile first then change the dimensions and raster effects):
Now here is how the Zazzle guide file's file properties look in Adobe Bridge:
Then here is how the file properties of the file I created from scratch looks:
It'd be really nice to figure out how to create a file that looks the same file property-wise and dimensions-wise as the Zazzle guide file. Why am I getting this wonky behavior in Bridge? What am I doing wrong?
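For what it's worth, the pixel counts in the question line up with plain inches × PPI arithmetic - a minimal sketch (the helper function is made up for illustration, not an Adobe API; 72 PPI is assumed here as Illustrator's default raster effects resolution for a new document):

```python
# Sketch of the PPI math behind the numbers in the question.
# inches_to_pixels is a hypothetical helper, not an Adobe API.

def inches_to_pixels(width_in, height_in, ppi):
    """Pixel dimensions of a raster rendering at a given PPI."""
    return round(width_in * ppi), round(height_in * ppi)

# The Zazzle 5x7 guide file: 5 x 7 inches at 200 PPI
print(inches_to_pixels(5, 7, 200))  # (1000, 1400)

# A from-scratch document left at an assumed 72 PPI default matches
# the mysteriously small pixel count Bridge reports:
print(inches_to_pixels(5, 7, 72))   # (360, 504)

# And 1400 x 1000 pixels read back at 72 PPI explains the near
# 20 x 14 inch ruler readings:
print(1400 / 72, 1000 / 72)         # roughly 19.4 x 13.9 inches
```

In other words, the same physical size produces very different pixel counts depending on which PPI the document is rasterized at, which would explain both the 504x360 result and the oversized rulers.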

Thanks Jacob. Let me ask you this: can you create a raster image but still have it be in the .AI file format? If so, how?
I think I kind of get what you're saying, but I feel like everyone is telling me I can't create a white elephant even though I'm standing right next to one. Their 5x7 guide file is 1000x1400 pixels and it's an .AI file not exported as whatever - how is that possible?
Here is the link to the recommended sizes for all their products:
http://zazzle.custhelp.com/app/answers/detail/a_id/85/~/recommended-image-sizes-and-resolution
Here is a link to the guide files:
http://www.zazzle.com/mk/custom/guidefiles
Here is a link to download my white elephant of a file:
http://asset.zcache.com/assets/graphics/z2/mk/custom/guideFiles/AI/invitation_5x7_vertical.ai
How do I recreate that file?

Similar Messages

  • Workflow issue with Bridge and everything else

    Short and sweet. Well, not really.
    I use Windows - so as a result, for some reason unknown to man (after a decade of waiting), I still cannot see previews of most, if not all, Adobe file types in the Windows dialog boxes.
    Think PDFs and the registry hacks we have to do to enable previews. But on a Mac I can?
    So I have to use bridge - a lot.
    Now when I have a linked image in Illustrator (or whatever) and choose "reveal in bridge" it will hunt and drill down through a multitude of folders and display the image in its native folder - which I can then open in Photoshop, for instance, edit it, and Illustrator will detect the change and update. Awesome.
    But aside from running a script to send a batch job to a program, it really doesn't "do" anything else that one would expect a "bridge" to do:
         I cannot create a "project" portfolio that would track a client project and all its related documents.
         Or collaborate with a team.
         I cannot have non destructive edits of a document (such as using an instance of a file like lightroom does) - or "send copy to photoshop" and have Bridge stack the results.
         I cannot combine and tag collections of documents (make a database of stuff).
         I cannot ask Bridge to tell me which documents have placed documents, or (vice versa) which documents are placed into other documents - that would be so cool.
              imagine being able to select a jpeg or psd file and have the option to "Find documents using this file"
              or looking at a "project" and having a list of all first level documents with all their subsequent "placed content" listed all drop down expandable
    So my beef today is that I finally spent an hour messing about thinking I had something "wrong" and went through Bridge CS6 with a fine-tooth comb trying to find out if anything had changed.
    and nope - it still doesn't behave as one would think it would.
    Currently, I have some documents that require monthly changes to images, which for the most part came from one photo shoot, and all have essentially the same perspective, appearance, lighting and shading, but all different products, and I have a folder of "preformatted images". So monthly I really only need to update the images and a bit of text. Since these are posters and InDesign whines a bit at vector art manipulation, I stick to Illustrator for this job.
    But in Illustrator, I cannot choose another image to replace my linked image - without either;
         a: (reveal in bridge), select new replacement image - place as a new image, which then needs to be manipulated as the "old" image was, "remaking" all the adjustments, and then finally deleting the original linked image.
         b: (reveal in bridge) keep bridge open on the image I need (and it's name), go back to illustrator to open the "relink" windows dialogue and "blindly" browse till I find the image I want as shown in bridge.
         c: simply relink and browse blindly to find a file you can't see a preview of and guess, or drag and drop, or all the other non practical ways to replace the image.
    Option B gives the desired end result where the image I have in a clipping mask, buried in a document in a certain way, with certain effects, is simply replaced without impacting anything else.
         But this takes quite some time when there are a lot of images.
    So Bridge and its promised workflow functionality still isn't living up to its purpose exactly.
    Bridge isn't bridging.
    With InDesign, it's the same issue - you can't "relink" by revealing in Bridge, choosing a "replacement" image and hitting "update".
    Maybe a "relink with Bridge" choice in the Links menu?
    and in bridge, since Illustrator triggered the request, a button appears to "Replace" - once you finish browsing and highlighting the replacement image you need, just click it and it's done.
    So two issues -
    Why can't previews of Adobe files be shown in Windows Explorer (still)?
         if they did, I'd really have no use for Bridge.
    Why isn't bridge actually communicating with the apps connecting to it?
    - and yes, I understand that I cannot ask bridge to go find a placed file and replace it with "this" file, but when illustrator "asks" for "this", bridge should be able to "respond" and give "that" since the app initiated the request.
    Bridge should be a workflow tool as advertised - it should literally be able to be a proper workflow hub for all things Adobe (and be able to catalogue and track all design file type previews - Corel files, CAD, 3D, etc.)... but as it stands it's just a glorified Windows Explorer without workflow options.
    Bridge hasn't been altered in years, and it's still lingering around - I think - so that we can simply see thumbnails, rather than just letting each Adobe program generate the previews Windows needs.
    I can change metadata in windows, I can use freeware to change it extensively.
    So aside from the "open in" options and the ability to browse all adobe file types, what's the point?

    I couldn't read all through your post as I am jetting out of here.
    Buy this:
    http://www.fastpictureviewer.com/codecs/
    and install it. It's relatively cheap. Screen shot is from Windows Explorer.
    Handles ID, PSD, etc., files as well. Though the preview uses greeked text for the most part for the ID files.
    Take care, Mike

  • I had an issue running Bridge CS6 on my laptop yesterday and after failing to troubleshoot I un-installed it from my system but have yet to find any way to download/reinstall it

    I had an issue running Bridge CS6 on my laptop yesterday and after failing to troubleshoot I un-installed it from my system but have yet to find any way to download/reinstall it. I have Web Design Premium and tried uninstalling/reinstalling Photoshop and Illustrator as I'd seen Bridge was coupled with those, but had no luck. Is it possible to download Bridge as a standalone with a Web Design Premium license for CS6?

    Restart your Design Premium installation, select custom install, and you should see options to pick which programs you want to install.
    Tick Bridge and install.

  • Cannot find object after creating dimension and cube.

    I need to develop a project using the OLAP API to get OLAP data.
    My DB release is 10.1.0.2; OWB is 10.1.0.4.
    After I create a dimension and cube in OWB, I cannot find any object I have created in OWB. The exception is in OEM: after I click the dimension button, a message appears saying that the dimension does not have complete metadata or was created by an old release, and that OEM will create metadata for it. After that, I can find the dimension in AWM and get the information using the OLAP API.
    However, there is no measure button in OEM, and I cannot find the measures in the measure folder. So I cannot get the information for the MdmMeasureDimension using the OLAP API.
    Can somebody help me with this issue? Thanks!

    With OWB 10.1 and prior releases to get OLAP metadata for a dimension or cube you must have used the OWB transfer wizard to deploy the OLAP metadata. This bridge basically creates and executes a script with all of the CWM calls to create the OLAP dimension and cube metadata.
    When you deployed the dimension from OWB in 10.1 it only created the 'create dimension DDL' for the object.
    Cheers
    David

  • Possible to limit dimensions and measures when creating presentations?

    We are trying to use OLAP/BI Beans to add BI functionality to our next-generation data warehouse application. This application has its own security framework, with the ability to define permissions/privileges for objects. We need to integrate BI Beans/OLAP with this security framework.
    One of the things we need to do is control which OLAP objects (like dimensions and measures) are available to a given user in the Items tab when creating a presentation. For example, user A might see dimensions Alpha, Bravo and measure Charlie, while user B might see dimensions Delta, Echo and measure Foxtrot.
    We need to be able to apply these dimension/measure restrictions without using different Oracle users, with each having access only to their own OLAP objects. Our data warehousing application does not use Oracle and Oracle users to control security; it has its own internal frameworks for privileges/permissions. We therefore need to find a way to restrict access to OLAP objects in some programmatic way.
    Here's an example of how this might work:
    - I am a clinical analyst. I sign on to the data warehouse application. The data warehouse knows that as a clinical analyst, I have access to a certain list of objects and functionality across the application. One of the apps I have privilege to is the BI Bean Presentation Creation Application, so I click the menu to bring this up. I can now create BI presentations, but since I am a clinical analyst my list of available dimensions and measures do not contain any of the G/L, payroll or other financial OLAP objects.
    - If I signed onto the data warehousing application as a different user, one that has a financial analyst role, I might see a different set of OLAP objects when I run the presentation application.
    So what we need is some API way to specify which dimensions and measures are available to a given user when they launch the presentation wizard. I've been digging through the BI Beans help and javadoc and have found a few things, but they aren't what I need.
    Here's what I found:
    - setItemSearchPath: this allows you to specify which folders are to be displayed. We want control at the OLAP object level, not at the folder level, so this doesn't work for us
    - setVisibleDimensions: this controls which dimensions are available in the Dimensions tab, not which dimensions can be selected in the Items tab. Doesn't work for us
    - setDimensionContext/setMeasureContext: These might work for us but I haven't been able to get them to retrieve anything yet. It also seems to me that these might set which dimensions/members are initially selected in the Items tab, not the list of dims/measures that are available for selection.
    Any assistance on this matter would be greatly appreciated.
    s.l.

    Reply from one of our developers:
    The get/setMeasureContext and get/setDimensionContext methods are currently only used by the Thick CalcBuilder (in a few limited scenarios) and cannot be used "to scope the dimensions and measures listed in Query and Calc builder based on user access rights".
    The scoping of dimensions and measures based on user access rights should be performed at the MetadataManager/Database level.
    This may change going forward as the real issue here is the static nature of the metadata and a general issue with the GRANT option within the database. From the database perspective it is not possible to grant select privileges on a single column of a table.
    The metadata issue is more complex as the OLAP API reads the metadata only once on startup of a session. The list of available measures is based on the GRANT privilege, so for relational OLAP this limits the data scoping capabilities. In 10g, the metadata for AW OLAP becomes more dynamic and contained, and is read directly from the AW. Therefore, with an AW OLAP implementation on 10g it could be possible to scope both dimensions and measures quickly and easily.
    Hope this helps
    Business Intelligence Beans Product Management Team
    Oracle Corporation

  • I would appreciate it if someone could advise me as to the optimum resolution, dimensions and dpi for actual photographic slides that I am scanning for use in a Keynote Presentation, that will be projected in a large auditorium

    I would appreciate it if someone could advise me as to the optimum resolution, dimensions and dpi for actual photographic slides that I am scanning for use in a Keynote Presentation, that will be projected in a large auditorium. I realize that most projectors in auditoriums that I will be using have 1024 x 1200 pixels, and possibly 1600 x 1200. There is no reference to this issue in the Keynote Tutorial supplied by Apple, and I have never found a definitive answer to this issue online (although there may be one).
                Here’s my question: When scanning my photographic slides, what setting, from 72 dpi to 300 dpi, would give the best image quality while using space most efficiently?
                Here’s what two different photo slide scanning service suppliers have told me: 
    Supplier No. 1 tells me that they can scan slides to a size of 1544 x 1024 pixels, at 72 dpi, which will be 763 KB, and they refer to this as low resolution (a JPEG). However, I noticed when I looked at these scanned slides, the size of the slides varied, with a maximum of 1.8 MB. This supplier says that the dpi doesn’t matter when it comes to the quality of the final digital image, that it is the dimensions that matter.  They say that if they scanned a slide to a higher resolution (2048 x 3072), they would still scan it at 72 dpi.
    Supplier No. 2: They tell me that in order to have a high quality image made from a photographic slide (starting with a 35 mm slide, in all cases), I need to have a “1280 pixel dimension slide, a JPEG, at 300 dpi, that is 8 MB per image.” However, this supplier also offers, on its list of services, a “Standard Resolution JPEG” (4 MB file/image – 3088 x 2048), as well as a “High Resolution JPEG” (8 MB file/image – 3088 x 2048).
    I will be presenting my Keynotes with my MacBook Pro, and will not have a chance to try out the presentations in advance, since the lecture location is far from my home, so that is not an option. 
    I do not want to use up more memory than necessary on my laptop.  I also want to have the best quality image. 
    One more question: When scanning images myself, on my own scanner, for my Keynote presentations, would I be better off scanning them as JPEGs or TIFFs? I have been told that a TIFF is better because it is less compressed. 
    Any enlightenment on this subject would be appreciated.
    Thank you.

    When it comes to Keynote, I try and start with a presentation that's 1680 x 1050 preset or something in that range.  Most projectors that you'll get at a conference won't project much higher than that and if they run at a lower resolution, it's better to have the device downsize your Keynote.  Anything is better than having the projector try and upsize your presentation... you work hard to make it look good, and it's mangled by some tired Epson projector.
    As far as slides go, scan them in at 150 dpi or better, and make them at least the dimensions of your presentation.  Keynote is really only wanting 72dpi, but I do them at 150, just in case I need to print out the presentation as a handout later, and having the pix at 150 dpi gives me a little help with their quality on a printer.
    You'd probably have to drop in the 150 versions again if you output the Keynote to .pdf or Word or something, but at least you have the option.
    And Gary's right (above) go ahead and scan them as TIFFs.  Sooner or later you'll want to do something else with these slides (like make something for an iPad or the like) and having them as TIFFs keeps your presentation looking good.
    Finally, and this is a big one, get to the location for your presentation ahead of time if you can, and plug the laptop in and see what you get.  There's always connection problems. Don't let the AV bonehead tell you everything will work just fine ('... I don't have any adapters for a Mac...') .  See it for yourself... you're the one that's standing up there.  Unless it's your boss, then you better be really sure it works.
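The dpi-versus-pixels point in the reply above comes down to simple arithmetic - a quick sketch (the helper function is made up for illustration; the numbers are the ones quoted in the thread):

```python
# The dpi tag in an image file is only metadata: image quality is fixed
# by the pixel dimensions, while dpi sets the nominal size when printed.
# print_size_inches is a hypothetical helper for this illustration.

def print_size_inches(px_w, px_h, dpi):
    """Physical print size (inches) of a px_w x px_h image at a given dpi."""
    return px_w / dpi, px_h / dpi

# Supplier No. 1's scan is 1544 x 1024 pixels. Retagging the same pixels
# with a different dpi changes only the nominal print size:
w72, h72 = print_size_inches(1544, 1024, 72)     # about 21.4 x 14.2 in
w300, h300 = print_size_inches(1544, 1024, 300)  # about 5.1 x 3.4 in
print(round(w72, 1), round(h72, 1), round(w300, 1), round(h300, 1))

# A projector only sees pixels, so both files look identical on screen.
# For a 1680 x 1050 Keynote theme, a 3088 x 2048 scan comfortably
# exceeds the needed pixel dimensions:
print(3088 >= 1680 and 2048 >= 1050)  # True
```

This is consistent with Supplier No. 1's claim that dpi doesn't matter for the final digital image: only the pixel dimensions do, until you print.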

  • Fact - dimension and Fact Join

    Hi ,
    I have an issue in OBIEE 11g here.
    I am trying to query two columns from 2 different fact tables, F1 and F2.
    The join between these tables is F1 - D - F2, where D is the conformed dimension.
    So when I query for
    f1.c1, d.c1, f2.c1, it throws the following error:
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 14020] None of the fact tables are compatible with the query request
    Can you please help me with this issue? Am I doing the correct join? And can we query two columns from two different fact tables?
    Thanks in advance.

    Hi All,
    Thanks for your responses. I have tried creating a hierarchy dimension and assigned levels to the fact sources, but I still see the error.
    Combining the 2 facts would resolve the issue, but that's not an option here, as each fact table has about 25 dimensions connected to it; there are only 2 conformed dimensions between both facts, and the problem is that the user wants to see both fact tables in the same subject area.
    I have been trying but am unable to get the solution.
    Can you guys please suggest any other way of achieving this task?
    Thanks a lot

  • Time Dimensions and Logical dimension "..that not join to any fact source"

    Hi Guys,
    I get the following error on the ANSWERS front end:
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 14026] Unable to navigate requested expression: ToDate(SPECIFICG:[DAggr(RFACT.SPECIFIC [ ] by [ EFact.NO [ ] , Year.ID [ ] ] )], [Level Years]). Please fix the metadata consistency warnings. (HY000)
    SQL Issued: SELECT EFact.NO saw_0, Fact.YTD_E saw_1, EFact.MTD_E saw_2 FROM "E#1" ORDER BY saw_0
    The consistency manager shows no errors and the tables are physically joined up via foreign key i.e.
    fact - year - monthday (joined on data - date - date; hierarchy columns: year, month, day)
    [see screen print|www.metallon.org/test/foreign.jpg]
    I would be thankful for any suggestions!

    Hello wildmight.
    I followed this tutorial:
    http://www.rittmanmead.com/2007/04/30/obi-ee-time-dimensions-and-time-series-calculations/
    Why should this layout not work?
    Do you have a better idea?
    When you say "it has to be one table", do you mean
    that quartermonthday and year need to be ONE?
    I just remembered that Oracle Hyperion Planning has to have
    time and months separated.
    Hello mod100.
    What do you mean by a hierarchy over the time dimension?
    Is it not obvious from the screen print that I have a hierarchy,
    i.e. yeartotal - years and
    quarters - months - days?
    I read that by default the LEVELS are set to the lowest, so that's fine. I set them to the lowest myself as well, but the ERROR is still the same.
    no, I have not set the levels as in
    http://obiee-tips.blogspot.com/2009/10/logical-levels.html

  • Obiee 11g non-conforming dimensions and nQSError 14025

    Does anyone know how to configure the 11g repository for non-conforming dimensions? This worked fine in 10g. I have upgraded the repository to 11g and it doesn't work anymore. I am receiving the error [nQSError: 14025] No fact table exists at the requested level of detail. I have tried building a simple test subject area and testing different configurations, but I have not had any success.
    Edited by: user10715047 on Jan 27, 2011 12:33 PM

    Ah, I love answering my own questions. :/
    The only way I could make this work was as follows...
    1. Make sure you have a level-based dimension for each of your logical table dimensions (both conforming and non-conforming).
    2. For the fact table measures, set the levels as you did in 10g with the non-conforming dimensions at the Grand Total logical level for each measure.
    3. For the fact table LTSs, set the logical level in the Content tab to the dimension's lowest level for each conforming dimension (leave the non-conforming dimensions level blank).
    Unfortunately, the query generated in 11g adds an additional sub-query to the mix even though it doesn't appear to have any benefit. In 10g, if you had two logical fact tables with non-conforming dimensions, three sub-queries were required to create the result set: two queries for the facts and their related dimensions, and a final full outer join to stitch the results together. In 11g, you get one query without the measures, two queries with the measures, and the final outer join.
    I am talking to Oracle support about this issue, but I haven't made much progress yet. I asked development to confirm my repository design and they say it checks out. They indicated that the additional query is a design change/enhancement. I am not getting a warm and fuzzy on this one. I'll post back if I make any progress.
    Oh, did I mention that this change has broken queries where I attempt to combine fields from two non-conforming dimensions?!? This worked fine in 10g.

  • Camera RAW update issue with Bridge CS6

    Hi folks,
    I recently installed a routine update for Camera RAW 8.4 and now have a bunch of issues with Bridge including -
    - No previews or ratings showing
    - Ratings filter not working
    - Can't open images directly into Camera RAW without first opening Photoshop (ie - keyboard shortcut not working and File menu command "Open in Camera raw" not available)
    - Can't access or open Camera RAW preferences
    I've tried resetting Bridge to default settings but it didn't help, and ACR still works fine opening in PS. I've attached two screenshots to show the problem better.
    Anyone else having these problems? Any solutions?

    This should fix it: Camera Raw 8.4 | No metadata or Camera Raw edit

  • Summary Report with 4 Dimensions and 5 Facts

    Hello OBIEE Czars:
    I have a problem.
    I am trying to make a summary report with 4 dimensions and 5 facts.
    Out of those, only 2 dimensions are conformed.
    Now when I try to bring in all the facts, I get a view display error.
    I searched forum for similar issues.
    So I have tried following so far.
    1) Add the other dimensions as sources and set the level to Total for those dims.
    - After doing this, I don't get the view display error, but I get blank rows for a couple of facts.
    2) I am already aggregating all the measures.
    Please help me out.
    This forum has been very helpful so far.
    Thanks.
    ~Vinay

    Hi
    This method seems fine until you run a request that includes a filter on an unconformed dimension.
    The SQL itself should be quite straightforward (see below) but the BI Server instance does not seem to be joining this data correctly after posting the 2 separate queries to the database.
    SELECT
      x.dim1, x.dim2,
      x.fact1_agg_measure,
      y.fact2_agg_measure
    FROM
      (SELECT
         dim1, dim2,
         SUM(fact1.measure) fact1_agg_measure
       FROM
         dim1,
         dim2,  -- unconformed
         fact1
       WHERE
         ....  -- join fact1 to dimensions as normal
         AND dim2 = 'BUSINESS_IDENTFIER'
       GROUP BY
         dim1, dim2
      ) x,
      (SELECT
         dim1,
         SUM(fact2.measure) fact2_agg_measure
       FROM
         dim1,
         fact2
       WHERE
         ....  -- join fact2 to dimensions as normal
       GROUP BY
         dim1
      ) y
    WHERE
      x.dim1 = y.dim1
    The result set returned in Answers is firstly a correctly aggregated record plus ALL rows from the second query above ??
    Just wondering if anyone else has come across this issue?
    cheers
    Tony

  • Java PDK Bugs and Issues

    Here are some bugs and issues I've run across using the JPDK that I thought other
    developers should be aware of. The following information comes from using JPDK
    1.1 with Oracle Portal Version 3.0.6.3.3 Early Adopter on Windows 2000.
    1) Do not use a colon character (':') in the String value returned by the method getTitle( Locale l ) in the class Portlet. Registering the provider will appear to succeed, but when you view the Portlet Repository you will get the following error message:
    An Unhandled Exception has occurred. ORA-06502: PL/SQL: numeric or value error:
    character to number conversion error
    Your provider and its portlets will not appear in the Portlet Repository when this error occurs.
    Perhaps other characters will cause this error as well.
    2) The Provider class method initSession() is supposed to propagate the array of returned Cookies back to the browser. The Cookies are never propagated to the browser. This is a huge road-block for our application and we need to have this problem fixed as there is no workaround.
    3) There is a limit to the number of portlets you can have per provider. I initially wrote a provider class that managed 19 portlet classes. However, after registering the provider only 17 portlet classes were loaded by the provider and/or displayed by the Portlet Repository. I had to create a second provider to manage additional portlets. The second provider worked out fine for me because I have 5 portlets that are for "administrator" users only. Moving these portlets left 14 portlets for the first provider to manage.
    Note: I don't know if this error occurs using the provider.xml method of implementing a provider and its portlets. My provider and portlets are implemented directly using the Java class API's.
    4) Sometimes I will receive the error "Meta data missing for portlet ID=<number>" when a portlet is rendered for the first time. This error does not occur often but when the error happens two conditions are met:
    a) The portlet is being rendered for the first time
    b) The HTTP and Web Assistant NT services have recently been started.
    This error is obviously caused by some timeout but increasing the timeout values
    for both the provider and the portlet has no effect. This error may be restricted to the NT platform.
    The following notes are not bugs but issues to be aware of:
    1) Make sure you have the "sessiontimeout" parameter defined when declaring the initArgs of a servlet in the zone.properties file and you intend to register your provider with a "Login Frequency" of "Once per User Session". For example:
    servlet.urlservlet.initArgs=debuglevel=1,sessiontimeout=1800000
    If you leave off the session timeout, Oracle Portal will call your provider's initSession() method for every request constantly generating new a session ID.
    2) Currently there is no means to check whether a ProviderUser has administrative
    privileges. This feature would be extremely helpful for restricting which portlets a user has access to when the provider's getPortlets() method is called.
    3) Currently there is no Java API for storing user and global preferences in the
    Oracle database. The JPDK provides a PersonalizationManager class but the method
    of storing the preferences needs to be implemented by the developer.
    The default Personalization Manager persists user preferences as a file
    to disk. However, this method opens up security holes and hinders scalability.
    We got around the security and scalability issues by using Oracle's JDBC
    driver to persist user and global preferences to custom tables in the underlying Oracle database.
    I would appreciate hearing from anyone who has run across the cookie propagation issue and has any further insights.
    Thanks...
    Dave White

    David,
    Thank you for your feedback on the JPDK. The information you provide helps us understand how customers are using 9iAS
    Portal and its development kits. I apologize for the delay in getting back with you. Since you are using the Early Adopters
    release, we wanted to test a few of the bugs and issues on the production release of 9iAS Portal.
    1) Using a colon character (:) in the String value returned by the method getTitle(Locale l) causing the ORA-06502 error is a
    known issue. This issue actually occurs within 9iAS Portal and should be resolved in the first maintenance release scheduled
    for 9iAS Portal.
    2) The Provider class method initSession() not propagating the array of returned cookies back to the browser is an issue that we are currently working on. This bug has been fixed for most cases in the first maintenance release. A 100% fix of this issue is still being worked on.
    3) The limit to the number of portlets you can have per provider was an issue in the Early Adopter release, but is no longer an issue with 9iAS Portal production. Upgrade to the production release and you should no longer see this problem.
    4) The error "Meta data missing for portlet ID=<number>". I have not seen or heard about others receiving this same message. For this error, can you upgrade to the production version and let me know if you still receive this error message. At that time we can check for differences within the configuration.
    Not bug, but issues......
    1) You have made a good point with the sessiontimeout parameter. The JPDK uses servlet 2.0 APIs, which do not provide access to the sessiontimeout. Currently, you will need to specify the sessiontimeout parameter in the zone.properties file.
    2) This is true. Currently there is no means to check whether a ProviderUser has administrative privileges. This is on our features list for future enhancements.
    3) This is also true. The DefaultPortletPersonalizationManager was created as a default runtime for developers not used to writing portlet code. It allows developers to write portlet code without concentrating on the underlying framework. Once a developer becomes more experienced with the JPDK and portlet environment, we encourage them to create their own
    customization manager. This includes changing how the portlet repository is stored or changing how the user customization is
    handled and where it is stored. You have no limitations as long as you follow the guidelines of the PortletPersonalizationManager interface.
    I hope this information helps. Again, we appreciate and welcome this type of feedback; it helps us not only locate bugs and issues, but also prioritize our enhancement list.
    Sue

  • Understanding Dimension creation in Oracle 9i

    Hi all
    Using the SQL*Plus command line (CREATE DIMENSION ...), I created a single dimension (3 levels, 1 attribute on the bottom level) based on a sample fact table.
    From the fact table I used 4 columns: 3 level IDs and 1 attribute for the bottom level.
    Now my questions are:
    1. How can I view the dimension's data content? I am not sure this is even possible, as my understanding is that 9i only creates the metadata for a dimension and no data is contained within the object.
    2. The fact table contains multiple rows having the same bottom-level ID. How do I know this data will not get replicated in the dimension? Or will Oracle exclude duplicate rows when populating it? I believe it's the latter but need confirmation anyway.
    3. If and when the base fact table gets altered/modified/appended, how can I "re-load" the dimension with new values? Or is that done automatically?
    Please note that our current infrastructure is Oracle 9iR2 on a SOLARIS 9 platform.
    Cheers.
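    As a concrete illustration of what the poster describes, a 3-level dimension with one attribute on the bottom level might look like the sketch below. All table and column names are invented for the example; the dictionary views at the end are how you would inspect the definition, since a dimension object holds only metadata, not data:

    ```sql
    -- Hypothetical source/fact table and names; only the shape (3 levels,
    -- 1 bottom-level attribute) matches the post.
    CREATE DIMENSION sales_dim
      LEVEL item    IS (sales_fact.item_id)
      LEVEL region  IS (sales_fact.region_id)
      LEVEL country IS (sales_fact.country_id)
      HIERARCHY geo_rollup (
        item CHILD OF region CHILD OF country)
      ATTRIBUTE item DETERMINES (sales_fact.item_desc);

    -- A dimension stores no rows of its own, so there is no "content" to
    -- view; inspect the metadata through the data dictionary instead:
    SELECT dimension_name, invalid FROM user_dimensions;
    SELECT level_name, detailobj_name FROM user_dim_levels;
    ```

    This also answers questions 2 and 3 above: because the dimension holds no rows, there is nothing to replicate or re-load; the declaration is simply validated against whatever data the underlying table currently contains.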

    Si, I really appreciate you taking time out and answering this query of mine.
    OK, so ideally I should have an actual physical dimension table with additional descriptive columns for each level and then subsequently map the dimension metadata definition to that table. But comparatively, from a tangible performance perspective, what am I missing out on?
    In all probability going the "physical" way I'll end up using more disk space by integrating millions of unique SCs in the dimensional table. Moreover, I'll have to "merge" most recent new-ish SCNUMBER records (daily) into the physical table first before running any query on the fact MV table, which will definitely incur some amount of time. I do agree that having more descriptive attributes for each level will only help me segment my overall analysis better but that's not important at this time.
    From a solely performance perspective am I doing the right thing by just defining meta-data? What else can I do to speed up analysis on this MV?
    Edited by: oracle_disciple on Jan 27, 2009 9:47 PM

  • Dimension and project auto updates for Balance Sheet GL

    Dear All,
    I have implemented SAP B1 in our company. The company has 2 manufacturing units and 3 trading units, and apart from that it also has branches. I have mapped the branches through dimensions, as this is required for revenue generation and to identify branch-wise sales only.
    I am not using account segmentation, as it is disabled.
    I need your help with the following issues:
    a.) How do I update the dimension and project for balance sheet items at the document level (AR/AP)?
    b.) Is it okay to change the account type of a balance sheet GL code from Others to Expenditure/Sales? Does it affect any financial reports?
    c.) I want to generate separate balance sheets for Manufacturing and Trading; without dimensions this is not possible.
    Please guide...
    Regards,
    Harshad Surve

    Hi Harshad,
    If you want to update the project or dimension at the document level, then each document prepared and saved in the system should carry a project that determines which dimension/location the deal belongs to.
    Following this process, you would be able to prepare separate balance sheets for all dimensions/locations/project codes.
    But for that you would need to create a separate balance sheet template for each dimension/location/project, and accordingly you would have to map the accounts in each balance sheet.
    For all of this, if the accounts are mapped per warehouse, it would be more effective and easy.
    Regards,
    Manish

  • Line item dimensions and cardinality?

    hi all,
    How do I identify and use the cardinality relationship as well as the line item dimension?
    Can anyone explain this to me? I am trying to create a MultiProvider that fetches data from 3 ODS objects.
    regds
    Hari Chintu

    Hi,
    Below is some useful information on Line item dimension and high cardinality.
    These concepts hold good for cube design only; they don't help with an ODS. As an ODS is nothing but a flat structure, we do not have the facility of the star schema there. Reporting on an ODS can lead to performance issues; it is better to load data from the ODS into a cube and then have a MultiProvider on top of that instead of on the ODS.
    Hope this helps.
    Regards,
    Kalyan
    Use
    When compared to a fact table, dimensions ideally have a small cardinality. However, there is an exception to this rule. For example, there are InfoCubes in which a document characteristic is used, in which case almost every entry in the fact table is assigned to a different document. This means that the dimension (or the associated dimension table) has almost as many entries as the fact table itself. We refer to this as a degenerated dimension. In BW 2.0 this was also known as a line item dimension, the characteristic responsible for the high cardinality being seen as a line item. Generally, relational and multi-dimensional database systems have problems efficiently processing such dimensions. You can use the indicators Line item and High cardinality to execute the following optimizations:
    1. Line item: The dimension contains precisely one characteristic. The system then does not create a dimension table; instead, the SID table of the characteristic takes on the role of the dimension table. Removing the dimension table has the following advantages:
    - When loading transaction data, no IDs are generated for the entries in the dimension table. This number range operation can compromise performance precisely in the case where a degenerated dimension is involved.
    - A table having a very large cardinality is removed from the star schema. As a result, the SQL-based queries are simpler. In many cases, the database optimizer can choose better execution plans.
    Nevertheless, it also has a disadvantage: A dimension marked as a line item cannot subsequently include additional characteristics. This is only possible with normal dimensions.
    2. High cardinality: The dimension is expected to have a large number of instances (that is, a high cardinality). This information is used to carry out optimizations on a physical level, depending on the database platform. Different index types are used than is normally the case. A general rule is that a dimension has high cardinality when the number of dimension entries is at least 20% of the fact table entries. If you are unsure, do not select high cardinality for a dimension.
    Note: In SAP BW 3.0 a line item dimension must a) have precisely one characteristic, and b) that characteristic must have a high cardinality. In SAP BW 2.0 the term was often associated only with a); hence the inclusion of property b) above. Be aware that a line item dimension now has a different meaning than in SAP BW 2.0.
    However, we recommend that you use ODS objects, where possible, instead of InfoCubes for line items. See Creating ODS Objects.
    Activities
    When creating dimensions in the InfoCube maintenance, flag the relevant dimension as a Line Item/ having High Cardinality.
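    The 20% rule of thumb above can be checked against the fact data before flagging the dimension. A hedged sketch with invented table and column names (in a real BW system you would typically check this via cube statistics rather than ad hoc SQL, but the arithmetic is the same):

    ```sql
    -- If the would-be dimension characteristic (e.g. a document number) has
    -- at least 20% as many distinct values as the fact table has rows, the
    -- guideline above suggests flagging the dimension as high cardinality.
    SELECT CASE
             WHEN COUNT(DISTINCT doc_number) >= 0.2 * COUNT(*)
             THEN 'flag as high cardinality / line item'
             ELSE 'normal dimension'
           END AS recommendation
      FROM fact_table;
    ```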
