Maximum size of the Queue data structure

Hi All,
I would like to know: what is the maximum size of the Queue?
Thanks,
Girish G

What does the documentation for the Queue interface say? And the Javadoc for the 11 classes that implement it?
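In short, the Queue interface itself does not define a maximum size: most implementations (LinkedList, PriorityQueue, ConcurrentLinkedQueue, ...) are unbounded and limited only by available heap memory, while bounded implementations such as ArrayBlockingQueue take a capacity in the constructor. A quick sketch of the difference (the capacity of 16 below is just an arbitrary example):

    import java.util.LinkedList;
    import java.util.Queue;
    import java.util.concurrent.ArrayBlockingQueue;

    public class QueueCapacityDemo {
        public static void main(String[] args) {
            // Unbounded queue: grows until the JVM runs out of heap (OutOfMemoryError).
            Queue<Integer> unbounded = new LinkedList<>();
            unbounded.offer(1); // succeeds as long as memory is available

            // Bounded queue: capacity is fixed at construction time (16 is arbitrary here).
            Queue<Integer> bounded = new ArrayBlockingQueue<>(16);
            for (int i = 0; i < 20; i++) {
                if (!bounded.offer(i)) { // offer returns false once the queue is full
                    System.out.println("Queue full at element " + i);
                    break;
                }
            }
        }
    }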

Similar Messages

  • Maximum size of the image for uploading.

    Dear All,
    I wrote an image upload program for uploading Color images. I need to know the maximum size of the image to be uploaded.
    I am using the class cl_gui_frontend_services=>gui_upload for uploading the image.
    Regards,
    Rahul R

    You will be converting the image to 'TIFF' or 'BMP' before uploading it through GUI_UPLOAD.
    I think it should work for large sizes too; if there is a problem you will get one of these exceptions:
    DP_OUT_OF_MEMORY : Not enough memory in the data provider
    DISK_FULL : Storage medium full
    So try it and see.

  • How to find the size of the master data objects in the BW system?

    Hi,
    We would like to understand the overall size of the master data tables for a specific source system (logical name is e.g. ABCD, which supplies all master data to BW). Can you please advise on the procedure for finding this out?
    Is there any table that lets us see all the master data objects related to a specific source system, so that we can then find each table's size? Thanks.
    For example, we have many Texts, Hier & Attributes.
    0COUNTRY_TEXT
    0REGION_TEXT
    0PERSON_ATTR
    0MATERIAL_LPRH_HIER
    Advance Thanks.

    Hi,
    You can find all the master data tables used in the BW system via Tcode SE11:
    Enter /BIC/P* for master data tables and press F4, which gives a list of the master data tables currently used in the BW system.
    Similarly, enter /BIC/T* for texts and press F4.
    Similarly, enter /BIC/H* for hierarchies and press F4.
    From there you can check the size of all the tables.
    Thanks,
    Gopal

  • DeserializeJSON - is there a limit on the size of the JSON data that can be converted?

    I have some valid JSON data that's being converted successfully by DeserializeJSON... until it gets to a certain size, or at least that's what appears to be happening.  The breaking point seems to be somewhere in the neighborhood of 35,000 characters, about 35 KB.  When the conversion fails, it fails with a "JSON parsing failure: Unexpected end of JSON string" message.  And even when the conversion fails, the JSON data is deemed valid by tools like this one:  http://www.freeformatter.com/json-validator.html.
    So, is there a limit on the size of the JSON data that can be converted by DeserializeJSON?
    Thanks!

    Thanks Carl.
    The JSON is being submitted in its entirety, confirmed by Fiddler.  And it's actually being successfully saved to a SQL Server nvarchar(MAX) field too.  I can validate that saved JSON.
    I'm actually grabbing the JSON to convert directly from the SQL Server, and your comments / thoughts led me down the path of resolution.
    Turns out that the JSON was being truncated before it ever got to the DeserializeJSON command; it was the cfquery pull that was doing the truncating.  The fix was to enable "long text retrieval (CLOB)" for this datasource in CF Admin.  I'd never run into that before or even known that this setting existed.
    Thanks again for your comments!

  • What is the smallest data structure record in a .TXT file record to be recognized as an Apple Address Book "Data Card"?

    Hello! This is my first time in this discussion group. The question posed is the subject line itself:
    What is the smallest data structure record in a .TXT file record to be recognized as an Apple Address Book "Data Card"?
    I'm lazy! As a math instructor with 40+ students per class per semester (pCpS), I would rather not have to create 40 data cards pCpS by hand, only to expunge that info at semester's end. My college's IS department can easily supply me with First name, Last name, and eMail address info, along with a myriad of other fields. I can manipulate those data on my end to create the necessary .TXT file, but I don't know the essential structure of that file.
    Can you help me?
    Thank you in advance.
    Bill

    Hello Bill, & welcome aboard!
    No idea what pCpS is, sorry.
    To import a text file into Address Book, it needs to be a comma-delimited .csv file, like...
    Customer Name,Company,Address1,Address2,City,State,Zip
    Customer 1,Company 1,2233 W Seventh Street,Unit 543,Seattle,WA,99099
    Customer 2,Company 2,1 Park Avenue,,New York,NY,10001
    Customer 3,Company 3,65 Loma Linda Parkway,,San Jose,CA,94321
    Customer 4,Company 4,89988 E 23rd Street,B720,Oakland,CA,99899
    Customer 5,Company 5,432 1st Avenue,,Seattle,WA,99876
    Customer 6,Company 6,76765 NE 92nd Street,,Seattle,WA,98009
    Customer 7,Company 7,8976 Poplar Street,,Coupeville,WA,98976
    Customer 8,Company 8,7677 4th Ave North,,Seattle,WA ,89876
    Customer 9,Company 9,4556 Fauntleroy Avenue,,West Seattle,WA,98987
    Customer 10,Company 10,4 Bell Street,,Cincinnati,OH,89987
    Customer 11,Company 11,4001 Beacon Ave North,,Seattle,WA,90887
    Customer 12,Company 12,63 Dehli Street,,Noida,India,898877-8879
    Customer 13,Company 13,63 Dehli Street,,Noida,India,898877-8879
    Customer 14,Company 14,63 Dehli Street,,Noida,India,898877-8879
    Customer 15,Company 15,4847 Spirit Lake Drive,,Bellevue,WA,98006
    Customer 16,Company 16,444 Clark Avenue,,West Seattle,WA,88989
    Customer 17,Company 17,6601 E Stallion,,Scottsdale,AZ,85254
    Customer 18,Company 18,801 N 34th Street,,Seattle,WA,98103
    Customer 19,Company 19,15925 SE 92nd,,Newcastle,WA,99898
    Customer 20,Company 20,3335 NW 220th,2nd Floor,Edmonds,WA,99890
    Customer 21,Company 21,444 E Greenway,,Scottsdale,AZ,85654
    Customer 22,Company 22,4 Railroad Drive,,Moclips,WA,98988
    Customer 23,Company 23,89887 E 64th,,Scottsdale,AZ,87877
    Customer 24,Company 24,15620 SE 43rd Street,,Bellevue,WA,98006
    Customer 25,Company 25,123 Smalltown,,Redmond,WA,98998
    Try Address Book Importer...
    http://www.sillybit.com/abee/
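    If the IS department gives you a tab-delimited export of first name, last name, and email address, a small conversion program can produce that comma-delimited file for you. A minimal sketch (the file names, column order, and header labels below are assumptions; adjust them to match your actual export and the fields you map during Address Book's import):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.PrintWriter;

    public class RosterToCsv {
        public static void main(String[] args) throws Exception {
            // Assumed input: tab-separated lines of "First<TAB>Last<TAB>Email" in roster.txt (hypothetical name).
            try (BufferedReader in = new BufferedReader(new FileReader("roster.txt"));
                 PrintWriter out = new PrintWriter("roster.csv")) {
                out.println("First Name,Last Name,Email"); // header row; map these columns during import
                String line;
                while ((line = in.readLine()) != null) {
                    String[] fields = line.split("\t");
                    if (fields.length >= 3) {
                        out.println(fields[0] + "," + fields[1] + "," + fields[2]);
                    }
                }
            }
        }
    }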

  • What is the new data structure for .RTM files in LV8?

    We have software that translates our LV program to another language.  Part of that software package reads RTM files, decodes them, replaces text, and resaves them.  That code is broken for menus that are saved in LV8+, however.  What is the new data structure?
    The simple attached zip file has VIs which are not ours but serve to illustrate the point.  If it's run on an older RTM file, it correctly outputs an XML version of it; on newer RTM files it just throws an error.
    -- This is the second time I posted this.  I was curious what the Solution? icon did on somebody else's post and I apparently marked the whole thread as solved (there should really be an undo on that feature, btw). --
    Attachments:
    RunTime_Menu_to_XML.zip (33 KB)

    Hi Thomas,
    What error are you getting when running the program?  I was actually able to run it without errors in LV 8.6.
    There are ways to programmatically read and customize RTM files in LabVIEW.  I hope that the following links will help:
    http://zone.ni.com/reference/en-XX/help/371361B-01/lvhowto/customizing_shortcut_menus_programm/
    http://digital.ni.com/public.nsf/allkb/17803AA31C8C07C986256CFD0080D609?OpenDocument
    Cheers, 
    Marti C
    Applications Engineer
    National Instruments
    NI Medical

  • Maximum size of the SSIS String Variable?

    In SSIS, what is the maximum size of the String variable (one that you would define in the Variables dialog box)?

     jwelch wrote:
    4000 is the limit on expressions. I'm not sure what the limit is on strings.
    This is all true.  Technically, I'm not sure there's a limit on string variables, though you can't really use that variable anywhere without the 4000-character expression limit applying.  (Most of the time the variable is used in an expression SOMEWHERE...)

  • What is the maximum size of the dmp(exp) file?

    Friends,
    OS: RHEL AS 3.0
    DB: Oracle 9iR2
    Daily I am taking a dmp file backup using the exp utility.
    Now the file size is 777 MB.
    What should be the maximum size of the dmp file?
    If it crosses 1 GB, is it possible to copy the dmp file from the production server to the test server using PuTTY?
    In future we are planning to upgrade our 9iR2 db to 10gR2.
    If I use an export dmp file of 1 GB for upgrading from 9iR2 to 10gR2, will there be any problem?
    Note: I am also taking user-managed backups.
    Thanks

    Regarding your question about using exp/imp for 9iR2 to 10gR1/2 upgrade, I've used it a lot for the same purpose without any problems, BUT I'm talking about application schemas with only tables/indexes and views. Don't know what would happen with more complex objects...
    If you have concerns about your export dump file size, there is always compression.
    Regards
    http://oracledisect.blogspot.com

  • OC4J: marshalling does not recreate the same data structure on the client

    Hi guys,
    I am trying to use OC4J as an EJB container and have come across the following problem, which looks like a bug.
    I have a value object method that returns an instance of ArrayList with references to other value objects of the same class. The value objects have references to other value objects. When this structure is marshalled across the network, we expect it to be recreated as is but that does not happen and instead objects get duplicated.
    Suppose we have 2 value objects: ValueObject1 and ValueObject2. ValueObject1 references ValueObject2 via its private field, and ValueObject2 references ValueObject1. Both value objects are returned by our method in an ArrayList structure. Here is how it looks (the number after @ represents an address in memory):
    Object[0] = com.cramer.test.SomeVO@1
    Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
    Object[1] = com.cramer.test.SomeVO@2
    Object[1].getValueObject[0] = com.cramer.test.SomeVO@1
    We would expect to see the same (except exact addresses) after marshalling. Here is what we get instead:
    Object[0] = com.cramer.test.SomeVO@1
    Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
    Object[1] = com.cramer.test.SomeVO@3
    Object[1].getValueObject[0] = com.cramer.test.SomeVO@4
    It can be seen that objects get unnecessarily duplicated – the instance of the ValueObject1 referenced by the ValueObject2 is not the same now as the instance that is referenced by the ArrayList instance.
    This does not only break referential integrity, structure and consistency of the data but dramatically increases the amount of information sent across the network. The problem was discovered when we found that a relatively small but complicated structure that gets serialized into a 142kb file requires about 20Mb of network communication. All this extra info is duplicated object instances.
    I have created a small test case to demonstrate the problem and let you reproduce it.
    Here is RMITestBean.java:
    package com.cramer.test;
    import javax.ejb.EJBObject;
    import java.util.*;
    public interface RMITestBean extends EJBObject {
        public ArrayList getSomeData(int testSize) throws java.rmi.RemoteException;
        public byte[] getSomeDataInBytes(int testSize) throws java.rmi.RemoteException;
    }
    Here is RMITestBeanBean.java:
    package com.cramer.test;
    import javax.ejb.SessionBean;
    import javax.ejb.SessionContext;
    import java.util.*;
    public class RMITestBeanBean implements SessionBean {
        private SessionContext context;
        SomeVO someVO;

        public void ejbCreate() {
            someVO = new SomeVO(0);
        }
        public void ejbActivate() {}
        public void ejbPassivate() {}
        public void ejbRemove() {}
        public void setSessionContext(SessionContext ctx) {
            this.context = ctx;
        }

        public byte[] getSomeDataInBytes(int testSize) {
            ArrayList someData = getSomeData(testSize);
            try {
                java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                objectOutputStream.writeObject(someData);
                objectOutputStream.flush();
                System.out.println(" serialised output size: "+byteOutputStream.size());
                byte[] bytes = byteOutputStream.toByteArray();
                objectOutputStream.close();
                byteOutputStream.close();
                return bytes;
            } catch (Exception e) {
                System.out.println("Serialisation failed: "+e.getMessage());
                return null;
            }
        }

        public ArrayList getSomeData(int testSize) {
            // Create array of objects
            ArrayList someData = new ArrayList();
            for (int i=0; i<testSize; i++) {
                someData.add(new SomeVO(i));
            }
            // Interlink all the objects
            for (int i=0; i<someData.size()-1; i++) {
                for (int j=i+1; j<someData.size(); j++) {
                    ((SomeVO)someData.get(i)).addValueObject((SomeVO)someData.get(j));
                    ((SomeVO)someData.get(j)).addValueObject((SomeVO)someData.get(i));
                }
            }
            // Print out the data structure
            System.out.println("Data:");
            for (int i = 0; i<someData.size(); i++) {
                SomeVO tmp = (SomeVO)someData.get(i);
                System.out.println("Object["+Integer.toString(i)+"] = "+tmp);
                System.out.println("Object["+Integer.toString(i)+"]'s some number = "+tmp.getSomeNumber());
                for (int j = 0; j<tmp.getValueObjectCount(); j++) {
                    SomeVO tmp2 = tmp.getValueObject(j);
                    System.out.println(" getValueObject["+Integer.toString(j)+"] = "+tmp2);
                    System.out.println(" getValueObject["+Integer.toString(j)+"]'s some number = "+tmp2.getSomeNumber());
                }
            }
            // Check the serialised size of the structure
            try {
                java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                objectOutputStream.writeObject(someData);
                objectOutputStream.flush();
                System.out.println("Serialised output size: "+byteOutputStream.size());
                objectOutputStream.close();
                byteOutputStream.close();
            } catch (Exception e) {
                System.out.println("Serialisation failed: "+e.getMessage());
            }
            return someData;
        }
    }
    Here is RMITestBeanHome:
    package com.cramer.test;
    import javax.ejb.EJBHome;
    import java.rmi.RemoteException;
    import javax.ejb.CreateException;
    public interface RMITestBeanHome extends EJBHome {
        RMITestBean create() throws RemoteException, CreateException;
    }
    Here is ejb-jar.xml:
    <?xml version = '1.0' encoding = 'windows-1252'?>
    <!DOCTYPE ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD Enterprise JavaBeans 2.0//EN" "http://java.sun.com/dtd/ejb-jar_2_0.dtd">
    <ejb-jar>
    <enterprise-beans>
    <session>
    <description>Session Bean ( Stateful )</description>
    <display-name>RMITestBean</display-name>
    <ejb-name>RMITestBean</ejb-name>
    <home>com.cramer.test.RMITestBeanHome</home>
    <remote>com.cramer.test.RMITestBean</remote>
    <ejb-class>com.cramer.test.RMITestBeanBean</ejb-class>
    <session-type>Stateful</session-type>
    <transaction-type>Container</transaction-type>
    </session>
    </enterprise-beans>
    </ejb-jar>
    And finally the application that tests the bean:
    package com.cramer.test;
    import java.util.*;
    import javax.rmi.*;
    import javax.naming.*;
    public class RMITestApplication {
        final static boolean HARDCODE_SERIALISATION = false;
        final static int TEST_SIZE = 2;

        public static void main(String[] args) {
            Hashtable props = new Hashtable();
            props.put(Context.INITIAL_CONTEXT_FACTORY, "com.evermind.server.rmi.RMIInitialContextFactory");
            props.put(Context.PROVIDER_URL, "ormi://lil8m:23792/alexei");
            props.put(Context.SECURITY_PRINCIPAL, "admin");
            props.put(Context.SECURITY_CREDENTIALS, "admin");
            try {
                // Get the JNDI initial context
                InitialContext ctx = new InitialContext(props);
                NamingEnumeration list = ctx.list("comp/env/ejb");
                // Get a reference to the Home Object which we use to create the EJB Object
                Object objJNDI = ctx.lookup("comp/env/ejb/RMITestBean");
                // Now cast it to an RMITestBeanHome object
                RMITestBeanHome testBeanHome = (RMITestBeanHome)PortableRemoteObject.narrow(objJNDI,RMITestBeanHome.class);
                // Create the remote interface
                RMITestBean testBean = testBeanHome.create();
                ArrayList someData = null;
                if (!HARDCODE_SERIALISATION) {
                    // ############################### Alternative 1 ##############################
                    // ## This relies on marshalling serialisation ##
                    someData = testBean.getSomeData(TEST_SIZE);
                    // ############################ End of Alternative 1 ##########################
                } else {
                    // ############################### Alternative 2 ##############################
                    // ## This gets a serialised byte stream and de-serialises it ##
                    byte[] bytes = testBean.getSomeDataInBytes(TEST_SIZE);
                    try {
                        java.io.ByteArrayInputStream byteInputStream = new java.io.ByteArrayInputStream(bytes);
                        java.io.ObjectInputStream objectInputStream = new java.io.ObjectInputStream(byteInputStream);
                        someData = (ArrayList)objectInputStream.readObject();
                        objectInputStream.close();
                        byteInputStream.close();
                    } catch (Exception e) {
                        System.out.println("Serialisation failed: "+e.getMessage());
                    }
                    // ############################ End of Alternative 2 ##########################
                }
                // Print out the data structure
                System.out.println("Data:");
                for (int i = 0; i<someData.size(); i++) {
                    SomeVO tmp = (SomeVO)someData.get(i);
                    System.out.println("Object["+Integer.toString(i)+"] = "+tmp);
                    System.out.println("Object["+Integer.toString(i)+"]'s some number = "+tmp.getSomeNumber());
                    for (int j = 0; j<tmp.getValueObjectCount(); j++) {
                        SomeVO tmp2 = tmp.getValueObject(j);
                        System.out.println(" getValueObject["+Integer.toString(j)+"] = "+tmp2);
                        System.out.println(" getValueObject["+Integer.toString(j)+"]'s some number = "+tmp2.getSomeNumber());
                    }
                }
                // Print out the size of the serialised structure
                try {
                    java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                    java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                    objectOutputStream.writeObject(someData);
                    objectOutputStream.flush();
                    System.out.println("Serialised output size: "+byteOutputStream.size());
                    objectOutputStream.close();
                    byteOutputStream.close();
                } catch (Exception e) {
                    System.out.println("Serialisation failed: "+e.getMessage());
                }
            } catch(Exception ex){
                ex.printStackTrace(System.out);
            }
        }
    }
    The parameters you might be interested in playing with are HARDCODE_SERIALISATION and TEST_SIZE, defined at the beginning of RMITestApplication.java. HARDCODE_SERIALISATION is a flag that specifies whether Java serialisation should be used to pass the data across or whether we should rely on OC4J marshalling. TEST_SIZE defines the size of the object graph and the ArrayList structure. The bigger this size is, the more dramatic the effect you get from data duplication.
    The test case outputs the structure both on the server and on the client and prints out the size of the serialised structure. That gives us sufficient comparison, as both structure and its size should be the same on the client and on the server.
    The test case also demonstrates that the problem is specific to OC4J. The standard Java serialisation does not suffer the same flaw. However using the standard serialisation the way I did in the test case code is generally unacceptable as it breaks the transparency benefit and complicates interfaces.
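    For anyone who wants to see what "the standard Java serialisation does not suffer the same flaw" means without deploying anything, here is a small standalone sketch (the Node class and field names are made up purely for illustration): two list entries that reference each other come back as the same shared instances after a writeObject/readObject round trip.

    import java.io.*;
    import java.util.ArrayList;

    public class SharedReferenceDemo {
        static class Node implements Serializable {
            Node peer; // reference to the other Node
        }

        public static void main(String[] args) throws Exception {
            Node a = new Node();
            Node b = new Node();
            a.peer = b;
            b.peer = a;
            ArrayList<Node> list = new ArrayList<>();
            list.add(a);
            list.add(b);

            // Serialise and de-serialise in-process.
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(list);
            }
            ArrayList<?> copy;
            try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
                copy = (ArrayList<?>) in.readObject();
            }

            // Standard serialisation preserves shared references within one stream:
            Node c0 = (Node) copy.get(0);
            Node c1 = (Node) copy.get(1);
            System.out.println(c0.peer == c1); // prints true
            System.out.println(c1.peer == c0); // prints true
        }
    }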
    To run the test case:
    1) Modify the Context.PROVIDER_URL property value in RMITestApplication.java for your environment.
    2) Deploy the bean to the server.
    4) Run RMITestApplication on a client PC.
    5) Compare the outputs on the server and on the client.
    I hope someone can reproduce the problem and give their opinion, and possibly point to the solution if there is one at the moment.
    Cheers,
    Alexei

    Hi,
    Eugene, that's the wrong kind of end user recovery.  Alexey is referring to client desktop end user recovery, which is entirely different.
    Alexey - As noted in the previous post:
    http://social.technet.microsoft.com/Forums/en-US/bc67c597-4379-4a8d-a5e0-cd4b26c85d91/dpm-2012-still-requires-put-end-users-into-local-admin-groups-for-the-purpose-of-end-user-data?forum=dataprotectionmanager
    Each recovery point has user permissions tied to it, so it's not possible to retroactively give users permissions.  Implement the below and going forward all users can restore their own files.
    This is a hands-off solution to allow all users that use a machine to be able to restore their own files.
     1) Make these two cmd files and save them in c:\temp
     2) Using the Windows scheduler, schedule addperms.cmd to run daily – any new users that log onto the machine will automatically be able to restore their own files.
    <addperms.cmd>
     Cmd.exe /v /c c:\temp\addreg.cmd
    <addreg.cmd>
     set users=
     echo Windows Registry Editor Version 5.00>c:\temp\perms.reg
     echo [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\ClientProtection]>>c:\temp\perms.reg
     FOR /F "Tokens=*" %%n IN ('dir c:\users\*. /b') do set users=!users!%Userdomain%\\%%n,
     echo "ClientOwners"=^"%users%%Userdomain%\\bogususer^">>c:\temp\perms.reg
     REG IMPORT c:\temp\perms.reg
     Del c:\temp\perms.reg

  • Maximum size for document column data

    Good afternoon:
    I am trying to upload a new document to my search index that has a field containing PDF searchable text data at a size of 247 KB. I am getting the following in a 207 response:
    One or more document fields are too large to process.
    What is the maximum size of data for upload? I checked this link (http://msdn.microsoft.com/en-us/library/azure/dn798934.aspx) but I'm not sure it is defined here. Thanks!

    Hi Pablo,
    I recently encountered the same symptom (using the 2014-07-31-Preview API). I looked at the data we are trying to send, and one of the fields contains about 78 KiB of text. That field is marked as "searchable" and "suggestions". Would the presence of "suggestions" imply a similar limit on field size?
    I can see now that the "suggestions" attribute has been removed from the latest API. If I switch to the latest and use only "searchable", I'm assuming the issue should go away?
    Thanks. 

  • Maximum size of Other Forms data file?

    I'm trying to resolve a problem with disappearing data. I'm using Safari 5.0.5 (because newer versions lack required features). Every few days, Safari begins to delete data from the Other Forms file (in username/Library/Safari/Other Forms) and continues deleting data every few minutes until most of the data is gone. After much testing on multiple Macs, it seems that the problem MAY be related to the size of that Other Forms file; I've never seen the file get larger than about 524K, and when it hits that general area, the deletions begin.
    Could you please check your Other Forms file and report two pieces of information here: the version of Safari you are using and the size in KB of your Other Forms file. I'm interested only in Mac experiences (not PC), only Safari 5.0.5 or later, and I'm particularly interested in learning whether anyone has an Other Forms file larger than 524KB.
    Of course, if anyone knows why the data might be disappearing in the first place, that would be helpful, too. (I can restore the data from a backup, but when I do, it will also start to disappear; and I've created new Other Forms files from scratch after resetting everything in Safari, but once the new file reaches about 524KB, it too starts to disappear.)
    Yes, I've repaired permissions and run Disk Utility, Disk Warrior and Tech Tool, to no avail.


  • What is the maximum size of an application downloaded to the iPad via GSM? Is it 20 MB? Is it possible to change this parameter?

    What is the maximum size of an application downloaded to the iPad via GSM? Is it 20 MB? Is there any possibility of changing this parameter? In the town where I live there is a lack of Wi-Fi, and the only possibility for downloading something from the App Store is to use GSM/3G.

    It has partly to do with the carrier.  From what I've surmised from reviews and feedback online, since the Apple iPhone was the most advanced phone on the market when it was initially released, the carriers were concerned about apps and movie downloads using up bandwidth, so a limit of 10 MB was initially set for downloads over a 3G connection.  Later Apple raised the limit, against the carriers' advice, to 20 MB.
    The interesting part is that the limitation doesn't exist on Android phones, even though they also have large apps and movies available for download and they use the same networks the iPhone uses!  So why are iPhone users still limited in what they can download at once while Android users are not?  They pay the same for their data plans, if not less in some cases, than iPhone users do!

  • What is the best data structure for loading an enterprise Power BI site?

    Hi folks, I'd sure appreciate some help here!
    I'm a kinda old-fashioned gal and a bit of a traditionalist, building enterprise data warehouses out of Analysis Services hypercubes with a whole raft of MDX for analytics.  Those puppies would sit up and beg when you asked them to deliver up goodies to SSRS or PowerView.
    But Power BI is a whole new game for me.
    Should I be exposing each dimension and fact table in the relational data warehouse as a single Odata feed?  
    Should I be running Data Management Gateway and exposing each table in my RDW individually?
    Should I be flattening my stars and snowflakes and creating a very wide First Normal Form dataset with everything relating to each fact? 
    I guess my real question, folks, is what's the optimum way of exposing data to the Power BI cloud?  
    And my subsidiary question is this: am I right in saying that all the data management, validation, cleansing, and regular ETTL processes are still required before the data is suitable to expose to Power BI?
    Or, to put it another way, is it not the case that you need to have a clean and properly structured data warehouse before the data is ready to be massaged and presented by Power BI?
    I'd sure value your thoughts and opinions,
    Cheers, Donna
    Donna Kelly

    Dear All,
    My original question was: 
    what's the optimum way of exposing data to the Power BI cloud?
    Having spent the last month faffing about with Power BI – and reading about many people’s experiences using it – I think I can offer a few preliminary conclusions.
    Before I do that, though, let me summarise a few points:
    Melissa said “My initial thoughts:  I would expose each dim & fact as a separate OData feed” and went on to say “one of the hardest things . . . is the data modeling piece . . . I think we should try to expose the data in a way that'll help usability . . . which wouldn't be a wide, flat table”.
    Greg said “data modeling is not a good thing to expose end users to . . . we've had better luck with is building out the data model, and teaching the users how to combine pre-built elements”.
    I had commented “. . . end users and data modelling don't mix . . . self-service so far has been mostly a bust”.
    Here at Redwing, we give out a short White Paper on Business Intelligence Reporting.  It goes to clients and anyone else who wants one.  The heart of the Paper is the Reporting Pyramid, which states:  Business intelligence is all about the creation and delivery of actionable intelligence to the right audience at the right time.
    For most of the audience, that means Corporate BI: pre-built reports delivered on a schedule.
    For most of the remaining audience, that means parameterised, drillable, and sliceable reporting available via the web, running the gamut from the dashboard to the details, available on demand.
    For the relatively few business analysts, that means the ability for business users to create their own semi-customised visual reports when required, to serve their audiences.
    For the very few high-power users, that means the ability to interrogate the data warehouse directly, extract the required data, and construct data mining models, spreadsheets and other intricate analyses as needed.
    On the subject of self-service, the Redwing view says:  Although many vendors want to sell self-service reporting tools to the enterprise, the facts of the matter are these:
    • 80%+ of all enterprise reporting requirement is satisfied by corporate BI . . . if it’s done right.
    • Very few staff members have the time, skills, or inclination to learn and employ self-service business intelligence in the course of their activities.
    I cannot just expose raw data and tell everyone to get on with it.  That way lies madness!
    I think that clean and well-structured data is a prerequisite for delivering business intelligence.
    Assuming that data is properly integrated, historically accurate and non-volatile as well, then I've just described a data warehouse, which is the physical expression of the dimensional model.
    Therefore, exposing the presentation layer of the data warehouse is – in my opinion – the appropriate interface for self-service business intelligence.
    Of course, we can choose to expose perspectives as well, which is functionally identical to building and exposing subject data marts.
    That way, all calculations, KPIs, definitions, and even field names are all consistent, because they all come from the single source of the truth and not from spreadmart hell.
    So my conclusion is that exposing the presentation layer of the properly modelled data warehouse is – in general – the way to expose data for self-service.
    That’s fine for the general case, but what about Power BI?  Well, it’s important to distinguish between new capabilities in Excel, and the ones in Office 365.
    I think that to all intents and purposes, we’re talking about exposing data through the Data Management Gateway and reading it via Power Query.  The question boils down to what data structures should go down that pipe.
    According to Create a Data Source and Enable OData Feed in Power BI Admin Center, the possibilities are tables and views.  I guess I could have repeating data in there, so it could be a flattened structure of the kind Melissa doesn’t like (and neither do I).
    I could expose all the dims and all the facts . . . but that would mean essentially re-building the DW in the PowerPivot DM, and that would be just plain stoopid.  I mean, not a toy system, but a real one with scores of facts and maybe hundreds of dimensions?
    Fact is, I cannot for the life of me see what advantages DMG/PQ has over just telling corporate users to go directly to the Cube Perspective they want, which already has all the right calcs, KPIs, security, analytics, field names . . . and, most importantly, is already modelled correctly!
    If I’m a real Power User, then I can use PQ on my desktop to pull mashup data from the world, along with all my on-prem data through my exposed Cube presentation layer, and PowerPivot the heck out of that to produce all the reporting I’d ever want.  It'd be a zillion times faster reading the data directly from the Cube instead of via the DMG, as well (I think Power BI performance sucks, actually).
    Of course, your enterprise might not have a DW, just a heterogeneous mass of dirty unstructured data.  If that’s the case, choosing Power BI data structures is the least of your problems!  :-)
    Cheers, Donna
    Donna Kelly

  • Maximum size for the internal hard drive

    I have an Intel-based 24" iMac. As soon as my AppleCare runs out I want to replace the internal hard drive with something larger. Does anyone have information on the maximum size I can install? I currently have 320 GB and am looking to increase to 1 TB, but I don't know if my Mac can handle it.

    1 TB will definitely work, as this was even a build-to-order option when this machine was for sale. I'm pretty sure that as long as you're not using FAT32, whatever size you put in should be compatible.
    -SM

  • Can I change the maximum size of the primary device?

    First of all, I love the new Captivate 8 and the new responsive features. I have found designing for the devices to be less complicated than I imagined.  I understand how to adjust the sizes of the three different devices. The primary device has a maximum width of 1280px and a minimum of 813px. Plus, I see I can adjust the tablet and mobile sizes as well. My issue is that the previous Captivate 7 course my company designed has dimensions of 1900x600. I would like to re-make this previous course as a responsive course, but how can I design the primary device to have a width of 1900px instead of the 1280px that is allowed?

    I'm assuming there is no answer to my previous question. Can anyone share a little insight on the UI for Captivate 8? Why is there a width limit of 1280 pixels for the desktop view?
