Kudos to the folks making Toad World great!
Does anyone know if there are plans for Hadoop/HiveServer to leverage Quest Authentication Services for Linux? We cannot connect Toad to Hadoop Hive with Kerberos authentication enabled. Thanks
When I try to map some HBase tables with several column families and more than 500K rows, the Quest Data Hub service crashes. For some tables the Data Hub works fine. Are there any limitations in Quest Data Hub regarding HBase tables?
I added an HBase table as a data source in the Data Hub in Toad for Cloud Databases. Can I perform operations such as SQL triggers on the mapped HBase data?
Is there any planned date to support Hiveserver2 with Toad?
I set up Toad for Cloud Databases and connected to HBase. When I map HBase table columns that contain integer or Boolean values, I get NULL in the query results.
I have two SFDC sandboxes. In the first are custom objects with data. I connected TCD, mapped some custom objects to tables, viewed the data, and queried the data. All good.

In the second are custom objects with no data. I connected TCD, but when I go to map a custom object, TCD lists only the out-of-the-box attributes for the object. The custom attributes are not listed. If I proceed to map the object to a table, the out-of-the-box attributes are listed, but TCD seems to be unaware of the custom attributes. When I query an attribute that I can see using SFDC's Schema Builder, TCD says "Unknown attribute... in 'field list'". Each custom object here says "In development/deployed."

Why can't I see the custom attributes for the custom objects in the second sandbox? Thanks for any help with this.
Hi, I am connecting to the Hive server using the TCD Eclipse plugin and running Hive QL commands. One question I had: how do I run multiple queries in one session? What I'm trying to do is run commands like:

add jar jarname;
create temporary function blah;
select blah(field) from table where date=date;

The first two commands have to run before the select, else the select will fail. Basically, all three commands have to run in one session. How can I do that?
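For reference, a minimal sketch of the statement sequence (the jar path and function class below are placeholders, not the poster's actual names). The key point is that all three statements must be issued over the same connection, since in Hive ADD JAR and temporary functions are session-scoped:

```sql
-- Session-scoped setup: the added jar and the temporary function are
-- visible only to the connection that ran these statements, so the
-- final SELECT must be executed on that same connection.
add jar /path/to/udf.jar;                              -- hypothetical path
create temporary function blah as 'com.example.Blah';  -- hypothetical class
select blah(field) from some_table where dt = '2013-01-01';
```

Whether the TCD Eclipse plugin issues each statement on a single shared connection, or opens a new one per execute, is the question here; a client that reconnects between statements will lose the jar and the function before the select runs.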
Hello, I use Toad for Cloud to view Salesforce data. I wonder how to limit the result set (by rows or percent) in SQL issued to SFDC. Thanks in advance for your help. Regards, ToadForCloud
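Assuming the hub passes a LIMIT clause through to the data source (SOQL itself supports LIMIT, though percent-based limiting does not exist in SOQL), something like this would cap the row count; the object and field names are the standard Salesforce ones:

```sql
-- Return at most 100 rows from the mapped Account object
select Name from Account limit 100;
```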
I'm using the Cassandra 1.2.x release (latest) and downloaded Toad for Cloud Databases. I'm struggling to get Toad to connect to Cassandra. The cluster is not local, and I am using the embedded data hub. The embedded data hub is connected, and I can connect to a SQL Server data source just fine. However, when I try to connect to Cassandra, I get an error. I can successfully connect to the remote Cassandra cluster using the CLI/CQL from my local machine. I can connect to the cluster using the Cassandra Cluster Admin tool as well, just not with Toad. I'm attaching the error screen. Any help will be greatly appreciated.
Hi all! First of all, nice job with Toad for Cloud (currently trying it out for Hive). I have a small problem (I tried doing a search but didn't get any results): when I map tables in the "Embedded Data Hub" from a Hive schema, all the BigInts, or any numerical values, get defaulted to the Integer type, which causes those columns to return NULLs in my SQL queries. I know I could map tables individually and give them the type "Real", but when you have a big schema with several hundred tables, this becomes a problem. Is there a way for me to default the numerical data type definitions to the "Real" data type, or could you consider adding a "decimal" data type? Thanks!
I have a problem with Toad for Cloud returning an empty result when connected to Hive. It displays the column names, but no rows are returned. I just installed it. It seems to be connected successfully, since it shows the tables that are part of the database. The same queries return rows when run over SSH. It happens with different scripts run against different tables. Has anyone else experienced the same behavior?
I've been struggling to come up with the right setup to implement sub-tables, as demonstrated in the "Example MongoDB Mappings" -- http://wiki.toadforcloud.com/index.php/Example_MongoDB_Mappings. The doc does explain quite a bit, but what is needed are some actual screenshots showing how to implement something like "Example #3 Nested Documents". I have a situation that looks a lot like the "MIXED DYNAMIC KEY DOCUMENT VALUE ARRAY VALUE" example. I can't seem to figure out what would go into the id and subtable_id for the various fields. Would it be possible to give us an example of how you would do this, using the current version of Toad for Cloud and MongoDB? Thanks!
Hive 0.9, Toad for Cloud 126.96.36.199.

Connect to Hive: I enter the host IP address (have also tried the fully qualified name); port: default 10000; tried Hive version 0.7.1 (as well as the others); job tracker port: default. Clicking OK hangs and I can't connect; I kill it and get a tde.exe. On Cancel, or closing with "x", tcd.exe is not responding and I have to close the program. Is this supposed to work?

I installed SQuirreL and have some 0.9 Hive jar files in a c:\jdbc_jars directory. I am able to connect through the SQuirreL software, but would prefer to use Toad if this 0.9 version is supported or will be supported soon. Thanks
I can connect to Hive, but trying to query a table with a JSON SerDe fails. This is against Amazon EMR with the MapR distribution. I'm able to query the table when I run Hive from the master node. The connection works, and in Toad I can query a table that has a CSV row format, but not JSON. Any ideas? I got error 4214 'Hive error: Query returned non-zero code: 12, cause: FAILED: Hive Internal Error: java.lang.RuntimeException(MetaException(message:org.apache.hadoop.hive.serde2.SerDeException SerDe org.openx.data.jsonserde.JsonSerDe does not exist))' from HUB.
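The error message indicates that the org.openx.data.jsonserde.JsonSerDe class is not on the classpath of the Hive session that the hub opens (it is on the master node's classpath, which is why queries work there). A possible workaround, assuming the client allows issuing session statements and using a hypothetical jar location, is to add the SerDe jar in the same session before querying:

```sql
-- Put the JSON SerDe on the session classpath before Hive tries to
-- deserialize rows; the jar path and table name here are assumptions
add jar /home/hadoop/lib/json-serde.jar;
select * from json_table limit 10;
```

ADD JAR is session-scoped in Hive, so it must run on the same connection as the query; registering the jar permanently on the server's auxiliary classpath would avoid repeating it per session.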
I have a direct Hive connection to Hiveserver 1 and cannot view non-default databases/schemas. What updates are needed? Thanks